BPF programs can use the bpf_trace_printk() helper to print debug
information into the trace pipe. Add a subcommand
"bpftool prog tracelog" to simply dump this pipe to the console.
This is in good part copied from iproute2, where the feature is
available with "tc exec bpf dbg". Changes include dumping pipe content
to stdout instead of stderr and adding JSON support (content is dumped
as an array of strings, one per line read from the pipe). This version
is dual-licensed, with Daniel's permission.
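For reference, a minimal user-space sketch of what the plain-output mode
boils down to is shown below; the tracefs path is an assumption for
illustration (the pipe may also live under /sys/kernel/tracing), and the
code is a sketch rather than bpftool's actual implementation:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        ssize_t n;
        int fd;

        fd = open("/sys/kernel/debug/tracing/trace_pipe", O_RDONLY);
        if (fd < 0) {
                perror("open trace_pipe");
                return 1;
        }
        /* trace_pipe blocks until new bpf_trace_printk() output arrives */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, n, stdout);
        close(fd);
        return 0;
}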
Cc: Daniel Borkmann <daniel@iogearbox.net>
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Ard Biesheuvel says:
====================
On arm64, modules are allocated from a 128 MB window which is close to
the core kernel, so that relative direct branches are guaranteed to be
in range (except in some KASLR configurations). Also, module_alloc()
is in charge of allocating KASAN shadow memory when running with KASAN
enabled.
This means that the way BPF reuses module_alloc()/module_memfree() is
undesirable on arm64 (and potentially other architectures as well),
and so this series refactors BPF's use of those functions to permit
architectures to change this behavior.
Patch #1 breaks out the module_alloc() and module_memfree() calls into
__weak functions so they can be overridden.
Patch #2 implements the new alloc/free overrides for arm64.
Changes since v3:
- drop 'const' modifier for free() hook void* argument
- move the dedicated BPF region to before the module region, putting it
within 4GB of the module and kernel regions on non-KASLR kernels
Changes since v2:
- properly build time and runtime tested this time (log after the diffstat)
- create a dedicated 128 MB region at the top of the vmalloc space for BPF
programs, ensuring that the programs will be in branching range of each
other (which we currently rely upon) but at an arbitrary distance from
the kernel and modules (which we don't care about)
Changes since v1:
- Drop misguided attempt to 'fix' and refactor the free path. Instead,
just add another __weak wrapper for the invocation of module_memfree()
====================
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jann Horn <jannh@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The arm64 module region is a 128 MB region that is kept close to
the core kernel, in order to ensure that relative branches are
always in range. So using the same region for programs that do
not have this restriction is wasteful, and preferably avoided.
Now that the core BPF JIT code permits the alloc/free routines to
be overridden, implement them by vmalloc()/vfree() calls from a
dedicated 128 MB region set aside for BPF programs. This ensures
that BPF programs are still in branching range of each other, which
is something the JIT currently depends upon (and is not guaranteed
when using module_alloc() on KASLR kernels like we do currently).
It also ensures that placement of BPF programs does not correlate
with the placement of the core kernel or modules, making it less
likely that leaking the former will reveal the latter.
This also solves an issue under KASAN, where shadow memory is
needlessly allocated for all BPF programs (which don't require KASAN
shadow pages since they are not KASAN instrumented).
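As a rough sketch of what such an override could look like (the
BPF_JIT_REGION_START/END constants and the exact allocation flags are
assumptions used for illustration, not a verbatim copy of the patch):

#include <linux/vmalloc.h>

void *bpf_jit_alloc_exec(unsigned long size)
{
        /* allocate from a dedicated 128 MB window instead of the module area */
        return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
                                    BPF_JIT_REGION_END, GFP_KERNEL,
                                    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
                                    __builtin_return_address(0));
}

void bpf_jit_free_exec(void *addr)
{
        return vfree(addr);
}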
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
By default, BPF uses module_alloc() to allocate executable memory,
but this is not necessary on all arches and potentially undesirable
on some of them.
So break out the module_alloc() and module_memfree() calls into __weak
functions to allow them to be overridden in arch code.
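A sketch of what the resulting default hooks look like; the hook names
are shown for illustration and should be checked against the actual
patch:

#include <linux/moduleloader.h>

void * __weak bpf_jit_alloc_exec(unsigned long size)
{
        /* default: keep using the module area, arch code may override */
        return module_alloc(size);
}

void __weak bpf_jit_free_exec(void *addr)
{
        module_memfree(addr);
}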
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Lorenz Bauer says:
====================
Right now, there is no safe way to use BPF_PROG_TEST_RUN with data_out.
This is because bpf_test_finish copies the output buffer to user space
without checking its size. This can lead to the kernel overwriting
data in user space after the buffer if xdp_adjust_head and friends are
in play.
Thanks to everyone for their advice and patience with this patch set!
Changes in v5:
* Fix up libbpf.map
Changes in v4:
* Document bpf_prog_test_run and bpf_prog_test_run_xattr
* Use struct bpf_prog_test_run_attr for return values
Changes in v3:
* Introduce bpf_prog_test_run_xattr instead of modifying the existing
function
Changes in v2:
* Make the syscall return ENOSPC if data_size_out is too small
* Make bpf_prog_test_run return EINVAL if size_out is missing
* Document the new behaviour of data_size_out
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Make sure that bpf_prog_test_run_xattr returns the correct length
and that the kernel respects the output size hint. Also check
that errno is set to ENOSPC if the given output buffer is too small.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add a new function, which encourages safe usage of the test interface.
bpf_prog_test_run continues to work as before, but should be considered
unsafe.
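A hedged usage sketch (header path, field layout and error convention
are assumptions to be checked against bpf.h in this tree; prog_fd and
the packet buffers are placeholders):

#include <errno.h>
#include <stdio.h>
#include <bpf/bpf.h>

static int run_once(int prog_fd)
{
        char in[64] = {0}, out[128];
        struct bpf_prog_test_run_attr tattr = {
                .prog_fd = prog_fd,
                .repeat = 1,
                .data_in = in,
                .data_size_in = sizeof(in),
                .data_out = out,
                .data_size_out = sizeof(out), /* in: capacity, out: actual length */
        };
        int err = bpf_prog_test_run_xattr(&tattr);

        if (err && errno == ENOSPC)
                fprintf(stderr, "output did not fit in %zu bytes\n", sizeof(out));
        else if (!err)
                printf("retval %u, %u bytes of output\n", tattr.retval,
                       tattr.data_size_out);
        return err;
}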
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Pull changes from "bpf: respect size hint to BPF_PROG_TEST_RUN if present".
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Use data_size_out as a size hint when copying test output to user space.
ENOSPC is returned if the output buffer is too small.
Callers which so far did not set data_size_out are not affected.
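The clamp-and-report logic described above, shown as a stand-alone
sketch rather than the kernel's actual helper ('capacity' plays the
role of the user's data_size_out hint):

#include <errno.h>
#include <stdint.h>
#include <string.h>

static int copy_test_output(void *dst, uint32_t capacity,
                            const void *src, uint32_t size,
                            uint32_t *reported_size)
{
        uint32_t copy_size = size;
        int err = 0;

        /* Respect the size hint if one was given; otherwise copy everything. */
        if (capacity && copy_size > capacity) {
                copy_size = capacity;
                err = -ENOSPC;
        }
        memcpy(dst, src, copy_size);
        *reported_size = size;  /* caller learns how much output was produced */
        return err;
}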
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
When a NULL pointer is accidentally passed to write_kprobe_events(),
strlen(NULL) causes a segmentation fault.
Change the code to return -1 to deal with this situation.
The bug was found with Smatch static analysis.
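A sketch of the guarded helper (the tracefs path and flag handling
mirror the usual samples/bpf pattern but are illustrative, not a
verbatim copy of the patch):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define DEBUGFS "/sys/kernel/debug/tracing/"

static int write_kprobe_events(const char *val)
{
        int fd, ret, flags;

        if (val == NULL)
                return -1;                      /* avoid strlen(NULL) below */
        else if (val[0] == '\0')
                flags = O_WRONLY | O_TRUNC;     /* empty string clears the file */
        else
                flags = O_WRONLY | O_APPEND;

        fd = open(DEBUGFS "kprobe_events", flags);
        if (fd < 0)
                return -1;

        ret = write(fd, val, strlen(val));
        close(fd);
        return ret;
}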
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The missing indentation on the "Return" sections for bpf_map_pop_elem()
and bpf_map_peek_elem() helpers breaks RST and man page generation. This
patch fixes them, and moves the description of those two helpers towards
the end of the list (even though they are somewhat related to the first
three helpers for maps, the man page explicitly states that the helpers
are sorted in chronological order).
While at it, bring other minor formatting edits for eBPF helpers
documentation: mostly blank lines removal, RST formatting, or other
small nits for consistency.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The pkt_len field in qdisc_skb_cb stores the skb length as it will
appear on the wire after segmentation. For byte accounting, this value
is more accurate than skb->len. It is computed on entry to the TC
layer, so it is only valid there.
Allow read access to this field from BPF tc classifier and action
programs. The implementation is analogous to tc_classid, aside from
restricting to read access.
To distinguish it from skb->len and make the field self-describing, export it as wire_len.
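A hedged example of a classifier reading the new field (it assumes a
uapi bpf.h that already carries wire_len; SEC() is defined inline
instead of pulling in bpf_helpers.h, and the drop threshold is a
placeholder policy, not part of the patch):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>

#define SEC(name) __attribute__((section(name), used))

SEC("classifier")
int wire_len_example(struct __sk_buff *skb)
{
        /* wire_len is the length as it will appear on the wire after
         * segmentation; for GSO skbs it also counts the replicated
         * per-segment headers and can therefore exceed skb->len.
         */
        if (skb->wire_len > 1 * 1024 * 1024)
                return TC_ACT_SHOT;     /* placeholder policy for illustration */
        return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";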
Changes v1->v2
- Rename pkt_len to wire_len
Signed-off-by: Petar Penkov <ppenkov@google.com>
Signed-off-by: Vlad Dumitrescu <vladum@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The whole libbpf is licensed as (LGPL-2.1 OR BSD-2-Clause). I missed it
while adding README.rst. Fix it and use same license as all other files
in libbpf do. Since I'm the only author of README.rst so far, no others'
permissions should be needed.
Fixes: 76d1b894c5 ("libbpf: Document API and ABI conventions")
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The aux->func_info and aux->btf are leaked in the error out cases
during bpf_prog_load(). This patch fixes it.
Fixes: ba64e7d852 ("bpf: btf: support proper non-jit func info")
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Matteo Croce says:
====================
Small improvements to the readability and ease of use of the
xdp1 sample.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Find the ifindex with if_nametoindex() instead of requiring the
numeric ifindex.
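A small sketch of the lookup ("eth0" is just a placeholder for the name
taken from argv in the sample):

#include <net/if.h>
#include <stdio.h>

int main(void)
{
        unsigned int ifindex = if_nametoindex("eth0");

        if (!ifindex) {
                perror("if_nametoindex");
                return 1;
        }
        printf("ifindex %u\n", ifindex);
        return 0;
}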
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Store only the total packet count for every protocol, instead of the
whole per-cpu array.
Use bpf_map_get_next_key() to iterate the map, instead of looking up
all the protocols.
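A sketch of that user-space iteration (the header path assumes an
installed libbpf; map_fd and nr_cpus are supplied by the caller):

#include <stdio.h>
#include <linux/types.h>
#include <bpf/bpf.h>

static void dump_counts(int map_fd, unsigned int nr_cpus)
{
        __u64 values[nr_cpus], sum;
        __u32 key = -1, next_key;
        unsigned int i;

        /* Passing a key that is not in the map yields the first key. */
        while (bpf_map_get_next_key(map_fd, &key, &next_key) == 0) {
                sum = 0;
                if (bpf_map_lookup_elem(map_fd, &next_key, values) == 0)
                        for (i = 0; i < nr_cpus; i++)
                                sum += values[i];       /* per-cpu counters */
                printf("proto %u: %llu pkts\n", next_key,
                       (unsigned long long)sum);
                key = next_key;
        }
}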
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
David Miller says:
====================
On sparc64 a ton of test cases in test_verifier.c fail because
the memory accesses in the test case are unaligned (or cannot
be proven to be aligned by the verifier).
Perhaps we can eventually try to (carefully) modify each test case
which has this problem to not use unaligned accesses but:
1) That is delicate work.
2) The changes might not fully respect the original
intention of the testcase.
3) In some cases, such a transformation might not even
be feasible at all.
So add an "any alignment" flag to tell the verifier to forcefully
disable its alignment checks completely.
test_verifier.c is then annotated to use this flag when necessary.
The presence of the flag in each test case is good documentation to
anyone who wants to actually tackle the job of eliminating the
unaligned memory accesses in the test cases.
I've also seen several weird things in test cases, like trying to
access __skb->mark in a packet buffer.
This gets rid of 104 test_verifier.c failures on sparc64.
Changes since v1:
1) Explain the new BPF_PROG_LOAD flag in easier to understand terms.
Suggested by Alexei.
2) Make bpf_verify_program() just take a __u32 prog_flags instead of
just accumulating boolean arguments over and over. Also suggested
by Alexei.
Changes since RFC:
1) Only the admin can allow the relaxation of alignment restrictions
on inefficient unaligned access architectures.
2) Use F_NEEDS_EFFICIENT_UNALIGNED_ACCESS instead of making a new
flag.
3) Annotate in the output, when we have a test case that the verifier
accepted but we did not try to execute because we are on an
inefficient unaligned access platform. Maybe with some arch
machinery we can avoid this in the future.
Signed-off-by: David S. Miller <davem@davemloft.net>
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
If a testcase has alignment problems but is expected to be ACCEPT,
verify it using F_NEEDS_EFFICIENT_UNALIGNED_ACCESS too.
Maybe in the future if we add some architecture specific code to elide
the unaligned memory access warnings during the test, we can execute
these as well.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Use F_NEEDS_EFFICIENT_UNALIGNED_ACCESS in more tests where the
expected result is REJECT.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Make it set the flag argument to bpf_verify_program() which will relax
the alignment restrictions.
Now all such test cases will go properly through the verifier even on
inefficient unaligned access architectures.
On inefficient unaligned access architectures do not try to run such
programs, instead mark the test case as passing but annotate the
result similarly to how it is done now in the presence of this flag.
So, we get complete full coverage for all REJECT test cases, and at
least verifier level coverage for ACCEPT test cases.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Often we want to write test cases that check things like bad context
offset accesses. And one way to do this is to use an odd offset on,
for example, a 32-bit load.
This unfortunately triggers the alignment checks first on platforms
that do not set CONFIG_EFFICIENT_UNALIGNED_ACCESS. So the test
case sees the alignment failure rather than what it was testing for.
It is often not completely possible to respect the original intention
of the test, or even test the same exact thing, while solving the
alignment issue.
Another option could have been to check the alignment after the
context and other validations are performed by the verifier, but
that is a non-trivial change to the verifier.
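A hedged sketch of loading a test program with such a flag via the raw
syscall (the BPF_F_ANY_ALIGNMENT name is assumed to be how the flag
lands in the uapi header; the instruction buffer and license are
placeholders, and libbpf's bpf_verify_program() exposes the same thing
through its prog_flags argument):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int load_any_alignment(const struct bpf_insn *insns, unsigned int cnt)
{
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
        attr.insns = (unsigned long)insns;
        attr.insn_cnt = cnt;
        attr.license = (unsigned long)"GPL";
        attr.prog_flags = BPF_F_ANY_ALIGNMENT;  /* skip strict alignment checks */

        return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}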
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
The message got changed a long time ago.
This was responsible for 36 test case failures on sparc64.
Fixes: f1174f77b5 ("bpf/verifier: rework value tracking")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Quentin Monnet says:
====================
Hi,
Several items for bpftool are included in this set: the first three patches
are fixes for bpftool itself and bash completion, while the last two
slightly improve the information obtained when dumping programs or maps, on
Daniel's suggestion. Please refer to individual commit logs for more
details.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
For prog array maps, the type of the owner program, and the JIT-ed state
of that program, are available from the file descriptor information
under /proc. Add them to "bpftool map show" output. Example output:
# bpftool map show
158225: prog_array name jmp_table flags 0x0
key 4B value 4B max_entries 8 memlock 4096B
owner_prog_type flow_dissector owner jited
# bpftool --json --pretty map show
[{
        "id": 1337,
        "type": "prog_array",
        "name": "jmp_table",
        "flags": 0,
        "bytes_key": 4,
        "bytes_value": 4,
        "max_entries": 8,
        "bytes_memlock": 4096,
        "owner_prog_type": "flow_dissector",
        "owner_jited": true
    }
]
As we move the table used for associating names to program types,
complete it with the missing types (lwt_seg6local and sk_reuseport).
Also add missing types to the help message for "bpftool prog"
(sk_reuseport and flow_dissector).
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
In bpftool (plain) output for "bpftool prog show" or "bpftool map show",
an offloaded BPF object is simply denoted with "dev ifname", which is
not really explicit. Change it to something that clearly shows the
program is offloaded.
While at it also add an additional space, as done between other
information fields.
Example output, before:
# bpftool prog show
1337: xdp tag a04f5eef06a7f555 dev foo
loaded_at 2018-10-19T16:40:36+0100 uid 0
xlated 16B not jited memlock 4096B
After:
# bpftool prog show
1337: xdp tag a04f5eef06a7f555 offloaded_to foo
loaded_at 2018-10-19T16:40:36+0100 uid 0
xlated 16B not jited memlock 4096B
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Commit 197c2dac74 ("bpf: Add BPF_MAP_TYPE_QUEUE and BPF_MAP_TYPE_STACK
to bpftool-map") added support for queue and stack eBPF map types in
bpftool map handling. Let's update the bash completion accordingly.
Fixes: 197c2dac74 ("bpf: Add BPF_MAP_TYPE_QUEUE and BPF_MAP_TYPE_STACK to bpftool-map")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Fix bash completion for "bpftool prog (attach|detach) PROG TYPE MAP" so
that the list of indices proposed for MAP are map indices, and not PROG
indices. Also use variables for map and prog reference types ("id",
"pinned", and "tag" for programs).
Fixes: b7d3826c2e ("bpf: bpftool, add support for attaching programs to maps")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
The getpid() function is called in a couple of places in bpftool to
craft links of the shape "/proc/<pid>/...". Instead, it is possible to
use the "/proc/self/" shortcut, which makes things a bit easier, in
particular in jit_disasm.c.
Do the replacement, and remove the includes of <sys/types.h> from the
relevant files, now that we do not use getpid() anymore.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
On arm64, all executable code is guaranteed to reside in the vmalloc
space (or the module space), and so jump targets will only use 48
bits at most, and the remaining bits are guaranteed to be 0x1.
This means we can generate an immediate jump address using a sequence
of one MOVN (move wide negated) and two MOVK instructions, where the
first one sets the lower 16 bits but also sets all top bits to 0x1.
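The construction can be illustrated in plain C: build a value whose top
16 bits are all ones from one negated 16-bit chunk plus two "insert and
keep the rest" steps, mirroring MOVN + MOVK + MOVK (the sample address
is arbitrary):

#include <assert.h>
#include <stdint.h>

int main(void)
{
        uint64_t addr = 0xffff123456789abcULL;  /* bits 63..48 all set */
        uint64_t v;

        /* MOVN: v = ~imm16 -> lower 16 bits of addr, everything else ones */
        v = ~((~addr) & 0xffffULL);
        /* MOVK #imm16, lsl 16: replace bits 31..16, keep the rest */
        v = (v & ~0xffff0000ULL) | (addr & 0xffff0000ULL);
        /* MOVK #imm16, lsl 32: replace bits 47..32, keep the rest */
        v = (v & ~0xffff00000000ULL) | (addr & 0xffff00000000ULL);

        assert(v == addr);      /* top 16 bits were already ones, no 4th insn */
        return 0;
}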
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann says:
====================
bpf-next 2018-11-30
The following pull-request contains BPF updates for your *net-next* tree.
(Getting out a bit earlier this time to pull in a dependency from bpf.)
The main changes are:
1) Add libbpf ABI versioning and document API naming conventions
as well as ABI versioning process, from Andrey.
2) Add a new sk_msg_pop_data() helper for sk_msg based BPF
programs that is used in conjunction with sk_msg_push_data()
for adding / removing metadata to the msg data, from John.
3) Optimize convert_bpf_ld_abs() for 0 offset and fix various
lib and testsuite build failures on 32 bit, from David.
4) Make BPF prog dump for !JIT identical to how we dump subprogs
when JIT is in use, from Yonghong.
5) Rename btf_get_from_id() to make it more conform with libbpf
API naming conventions, from Martin.
6) Add a missing BPF kselftest config item, from Naresh.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
While porting libbpf to bcc, I got some warnings like the ones below:
...
[ 2%] Building C object src/cc/CMakeFiles/bpf-shared.dir/libbpf/src/libbpf.c.o
/home/yhs/work/bcc2/src/cc/libbpf/src/libbpf.c:12:0:
warning: "_GNU_SOURCE" redefined [enabled by default]
#define _GNU_SOURCE
...
[ 3%] Building C object src/cc/CMakeFiles/bpf-shared.dir/libbpf/src/libbpf_errno.c.o
/home/yhs/work/bcc2/src/cc/libbpf/src/libbpf_errno.c: In function ‘libbpf_strerror’:
/home/yhs/work/bcc2/src/cc/libbpf/src/libbpf_errno.c:45:7:
warning: assignment makes integer from pointer without a cast [enabled by default]
ret = strerror_r(err, buf, size);
...
bcc is built with _GNU_SOURCE defined and this caused the above warnings.
This patch intends to make libbpf _GNU_SOURCE friendly by
. define _GNU_SOURCE in libbpf.c unless it is already defined
. undefine _GNU_SOURCE where the non-GNU version of strerror_r() is expected.
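A sketch of the two adjustments (not the literal diff):

/* libbpf.c: only define _GNU_SOURCE if the build did not already */
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif

/* libbpf_errno.c: make sure strerror_r() is the XSI variant returning
 * an int rather than the GNU variant returning a char pointer
 */
#undef _GNU_SOURCE
#include <string.h>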
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
We can remove the loop and conditional branches
and compute wscale efficiently thanks to ilog2().
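A small user-space check of the equivalence behind the change: the old
shift-and-count loop and an ilog2()-style computation pick the same
window scale (TCP_MAX_WSCALE is 14 in the kernel; ilog2 is emulated
with a builtin here):

#include <assert.h>
#include <stdint.h>

#define TCP_MAX_WSCALE 14

static unsigned int wscale_loop(uint32_t space)
{
        unsigned int wscale = 0;

        while (space > 65535U && wscale < TCP_MAX_WSCALE) {
                space >>= 1;
                wscale++;
        }
        return wscale;
}

static unsigned int wscale_ilog2(uint32_t space)
{
        int w = space ? (31 - __builtin_clz(space)) - 15 : 0;   /* ilog2() - 15 */

        if (w < 0)
                w = 0;
        if (w > TCP_MAX_WSCALE)
                w = TCP_MAX_WSCALE;
        return w;
}

int main(void)
{
        for (uint64_t s = 1; s < (1ULL << 32); s += 4097)
                assert(wscale_loop((uint32_t)s) == wscale_ilog2((uint32_t)s));
        return 0;
}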
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
skb_linearization can fail due to memory allocation failure.
In such a case, the driver drops the packet and used to print an
error message. This patch replaces the error message with a
dedicated statistic.
Signed-off-by: Michael Shteinbok <michael.shteinbok@cavium.com>
Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add comments in the switch statement for XDP action to indicate
fallthrough is intended.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kunihiko Hayashi says:
====================
Add suspend/resume support for AVE ethernet driver
This series adds support for suspend/resume to the AVE ethernet driver.
To avoid the error caused by the wol state of the phy hardware being
enabled by default, this sets the initial wol state to disabled and
preserves the state across the suspend/resume sequence.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the wol state is forcibly initialized after reset, the state should
be preserved across the suspend/resume sequence.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the wol state of the phy hardware is enabled after reset,
phy_ethtool_get_wol() reports that wol.wolopts is true.
However, net_device.wol_enabled is zero and the wol state is not applied
until ethtool_set_wol() is called, so mdio_bus_phy_may_suspend() returns
true; that is, the phy is considered able to suspend even though its wol
state is enabled.
Because of this inconsistency, phy_suspend() returns -EBUSY, and the
suspend sequence fails with the following message:
dpm_run_callback(): mdio_bus_phy_suspend+0x0/0x58 returns -16
PM: Device 65000000.ethernet-ffffffff:01 failed to suspend: error -16
PM: Some devices failed to suspend, or early wake event detected
To fix the above issue, this patch forces the initial wol state to
disabled as the default.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch introduces suspend and resume functions to the ave driver.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'linux-can-next-for-4.21-20181128' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
This is a pull request for net-next/master consisting of 18 patches.
The first patch is by Colin Ian King and fixes the spelling in the ucan
driver.
The next three patches target the xilinx driver. YueHaibing's patch
fixes the return type of ndo_start_xmit function. Two patches by
Shubhrajyoti Datta add support for the CAN FD 2.0 controllers.
Flavio Suligoi's patch for the sja1000 driver adds support for the ASEM
CAN raw hardware.
Wolfram Sang's and Kuninori Morimoto's patches switch the rcar driver to
use SPDX license identifiers.
The remaining 11 patches improve the flexcan driver. Pankaj Bansal's
patch enables the driver in Kconfig on all architectures with IOMEM
support. The next four patches by me fix indentation and add missing
parentheses and comments. Aisheng Dong's patches add self wake support
and document it in the DT bindings. The remaining patches by Pankaj
Bansal first fix the loopback support and prepare the driver for the
CAN-FD support needed for the LX2160A SoC. The actual CAN-FD support
will be added in a later patch series.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Trivial conflict in net/core/filter.c, a locally computed
'sdif' is now an argument to the function.
Signed-off-by: David S. Miller <davem@davemloft.net>
Cannot cast a u64 to a pointer on 32-bit without an intervening (long)
cast, otherwise GCC warns.
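A tiny illustration of the pattern (the helper name is hypothetical):

#include <stdint.h>

void *u64_to_ptr(uint64_t v)
{
        return (void *)(long)v; /* (void *)v alone warns on 32-bit targets */
}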
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
CONFIG_FTRACE_SYSCALLS=y is required for the get_cgroup_id_user test case.
This test reads a file from the debug trace path:
/sys/kernel/debug/tracing/events/syscalls/sys_enter_nanosleep/id
Signed-off-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
John Fastabend says:
====================
After being able to add metadata to messages with sk_msg_push_data we
have also found it useful to be able to "pop" this metadata off before
sending it to applications in some cases. This series adds a new helper
sk_msg_pop_data() and the associated patches to add tests and tools/lib
support.
Thanks!
v2: Daniel caught that we missed adding sk_msg_pop_data to the changes
data helper so that the verifier ensures BPF programs revalidate
data after using this helper. Also improve documentation adding a
return description and using RST syntax per Quentin's comment. And
delta calculations for DROP with pop'd data (albeit a strange set
of operations for a program to be doing) had the potential to be
incorrect, possibly confusing user space applications, so fix it.
====================
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Similar to msg_pull_data and msg_push_data, add a set of options to
exercise msg_pop_data().
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Add the necessary header definitions to tools for the new
msg_pop_data helper.
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
This adds a BPF SK_MSG program helper so that we can pop data from a
msg. We use this to pop metadata from a previous push data call.
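A hedged example of an SK_MSG program using the helper to strip a
fixed-size header that an earlier bpf_msg_push_data() call added; the
8-byte length is illustrative and the helper prototype should be
checked against the uapi header in this tree:

#include <linux/bpf.h>

#define SEC(name) __attribute__((section(name), used))

static int (*bpf_msg_pop_data)(struct sk_msg_md *msg, __u32 start,
                               __u32 len, __u64 flags) =
        (void *)BPF_FUNC_msg_pop_data;

SEC("sk_msg")
int pop_metadata(struct sk_msg_md *msg)
{
        /* Remove 8 bytes of metadata from the start of the message;
         * data pointers must be revalidated after the call.
         */
        if (bpf_msg_pop_data(msg, 0, 8, 0))
                return SK_DROP;
        return SK_PASS;
}

char _license[] SEC("license") = "GPL";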
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Pull networking fixes from David Miller:
1) ARM64 JIT fixes for subprog handling from Daniel Borkmann.
2) Various sparc64 JIT bug fixes (fused branch convergence, frame
pointer usage detection logic, PSEUDO call argument handling).
3) Fix to use BH locking in nf_conncount, from Taehee Yoo.
4) Fix race of TX skb freeing in ipheth driver, from Bernd Eckstein.
5) Handle return value of TX NAPI completion properly in lan743x
driver, from Bryan Whitehead.
6) MAC filter deletion in i40e driver clears wrong state bit, from
Lihong Yang.
7) Fix use after free in rionet driver, from Pan Bian.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (53 commits)
s390/qeth: fix length check in SNMP processing
net: hisilicon: remove unexpected free_netdev
rapidio/rionet: do not free skb before reading its length
i40e: fix kerneldoc for xsk methods
ixgbe: recognize 1000BaseLX SFP modules as 1Gbps
i40e: Fix deletion of MAC filters
igb: fix uninitialized variables
netfilter: nf_tables: deactivate expressions in rule replecement routine
lan743x: Enable driver to work with LAN7431
tipc: fix lockdep warning during node delete
lan743x: fix return value for lan743x_tx_napi_poll
net: via: via-velocity: fix spelling mistake "alignement" -> "alignment"
qed: fix spelling mistake "attnetion" -> "attention"
net: thunderx: fix NULL pointer dereference in nic_remove
sctp: increase sk_wmem_alloc when head->truesize is increased
firestream: fix spelling mistake: "Inititing" -> "Initializing"
net: phy: add workaround for issue where PHY driver doesn't bind to the device
usbnet: ipheth: fix potential recvmsg bug and recvmsg bug 2
sparc: Adjust bpf JIT prologue for PSEUDO calls.
bpf, doc: add entries of who looks over which jits
...