The verifier errors around stack accesses have changed slightly in the
previous commit (generally for the better).
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-3-andreimatei1@gmail.com
Instead of using shared global variables between userspace and BPF, use
the BPF ring buffer to send the IMA hash to userspace. This helps in
validating both IMA and the usage of the ring buffer in sleepable
programs.
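For illustration only, a minimal sketch of the flow being validated (map,
section, and hook names are assumptions, not taken from this patch): a
sleepable LSM program computes the IMA hash and pushes it to userspace via
the BPF ring buffer.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_RINGBUF);
  	__uint(max_entries, 4096);
  } ringbuf SEC(".maps");

  SEC("lsm.s/bprm_committed_creds")
  int BPF_PROG(measure, struct linux_binprm *bprm)
  {
  	u32 *hash;

  	hash = bpf_ringbuf_reserve(&ringbuf, 32, 0);
  	if (!hash)
  		return 0;

  	/* bpf_ima_inode_hash() is only callable from sleepable programs */
  	bpf_ima_inode_hash(bprm->file->f_inode, hash, 32);
  	bpf_ringbuf_submit(hash, 0);
  	return 0;
  }

  char _license[] SEC("license") = "GPL";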
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204193622.3367275-3-kpsingh@kernel.org
Add a short note to make contributors aware of the existence of the
script. The documentation intentionally does not describe all the
options of the script, to avoid documenting them in two places (they are
available in the usage / help message of the script).
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204194544.3383814-3-kpsingh@kernel.org
The script runs the BPF selftests locally on the same kernel image
as they would run post submit in the BPF continuous integration
framework.
The goal of the script is to allow contributors to run selftests locally
in the same environment to check if their changes would end up breaking
the BPF CI and reduce the back-and-forth between the maintainers and the
developers.
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/bpf/20210204194544.3383814-2-kpsingh@kernel.org
This patch adds support to verifier tests to check for a succession of
verifier log messages on program load failure. This makes the errstr
field work uniformly across REJECT and VERBOSE_ACCEPT checks.
This patch also increases the maximum size of a message in the series of
messages to test from 80 chars to 200 chars. This is in order to keep
existing tests working, which sometimes test for messages larger than 80
chars (which was accepted in the REJECT case, when testing for a single
message, but not in the VERBOSE_ACCEPT case, when testing for possibly
multiple messages).
An example of such a long, checked message is in bounds.c: "R1 has
unknown scalar with mixed signed bounds, pointer arithmetic with it
prohibited for !root"
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210130220150.59305-1-andreimatei1@gmail.com
Some compilers trigger a warning when tmp_dir_path is allocated
with a fixed size of 64-bytes and used in the following snprintf:
snprintf(tmp_exec_path, sizeof(tmp_exec_path), "%s/copy_of_rm",
tmp_dir_path);
warning: ‘/copy_of_rm’ directive output may be truncated writing 11
bytes into a region of size between 1 and 64 [-Wformat-truncation=]
This is because it assumes that tmp_dir_path can be a maximum of 64
bytes long and, therefore, the end result can get truncated. Fix it by
not using a fixed size in the initialization of tmp_dir_path, which
allows the compiler to track the actual size of the array better.
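A minimal sketch of the shape of the fix (the path template is illustrative,
not the literal used in the test): sizing the buffer from its initializer
lets the compiler see the real maximum length.

  /* before: char tmp_dir_path[64] = "/tmp/dir_XXXXXX"; */
  char tmp_dir_path[] = "/tmp/dir_XXXXXX";   /* size deduced from the literal */
  char tmp_exec_path[64];

  snprintf(tmp_exec_path, sizeof(tmp_exec_path), "%s/copy_of_rm",
  	 tmp_dir_path);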
Fixes: 2f94ac1918 ("bpf: Update local storage test to check handling of null ptrs")
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210202213730.1906931-1-kpsingh@kernel.org
When BPF_FETCH is set, atomic instructions load a value from memory
into a register. The current verifier code first checks via
check_mem_access whether we can access the memory, and then checks
via check_reg_arg whether we can write into the register.
For loads, check_reg_arg has the side effect of marking the
register's value as unknown, and check_mem_access has the side effect
of propagating bounds from memory to the register. This currently only
takes effect for stack memory.
Therefore with the current order, bounds information is thrown away,
but by simply reversing the order of check_reg_arg
vs. check_mem_access, we can instead propagate bounds smartly.
A simple test is added with an infinite loop that can only be proved
unreachable if this propagation is present. This is implemented both
with C and directly in test_verifier using assembly.
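A hedged sketch of the C flavour of that test (prog and section names are
assumptions for illustration): the value returned by __sync_fetch_and_add()
inherits the bounds of the stack slot, so the loop guard is provably zero.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(sub, int x)
  {
  	int a = 0;
  	int b = __sync_fetch_and_add(&a, 1);	/* b gets the old value of a */

  	/* b is certainly 0 here; with the reordered checks the verifier
  	 * can tell, so the loop is proved unreachable. */
  	while (b)
  		continue;
  	return 0;
  }

  char _license[] SEC("license") = "GPL";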
Suggested-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210202135002.4024825-1-jackmanb@google.com
Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on a locked socket.
Note that we could remove the switch for prog->expected_attach_type altogether
since all current sock_addr attach types are covered. However, it makes sense
to keep it as a safe-guard in case new sock_addr attach types are added that
might not operate on a locked socket. Therefore, avoid letting this slip through.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-5-sdf@google.com
Can be used to query/modify socket state for unconnected UDP sendmsg.
Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on
a locked socket.
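For illustration, a minimal, hypothetical sock_addr program on the UDP
sendmsg path (section and option choice are assumptions): because the hook
runs on a locked socket, socket state can be modified, e.g. via
bpf_setsockopt().

  #include <sys/socket.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("cgroup/sendmsg4")
  int sendmsg_v4(struct bpf_sock_addr *ctx)
  {
  	int mark = 42;

  	/* safe here: the socket is locked for this hook */
  	bpf_setsockopt(ctx, SOL_SOCKET, SO_MARK, &mark, sizeof(mark));
  	return 1;	/* allow the sendmsg */
  }

  char _license[] SEC("license") = "GPL";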
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-2-sdf@google.com
When dealing with BPF/BTF/pahole and DWARF v5 I wanted to build bpftool.
While looking into the source code I found duplicate assignments in misc tools
for the LLVM ecosystem, e.g. clang and llvm-objcopy.
Move the Clang, LLC and/or LLVM utils definitions to tools/scripts/Makefile.include
file and add missing includes where needed. Honestly, I was inspired by the commit
c8a950d0d3 ("tools: Factor HOSTCC, HOSTLD, HOSTAR definitions").
I tested with bpftool and perf on Debian/testing AMD64 and LLVM/Clang v11.1.0-rc1.
Build instructions:
[ make and make-options ]
MAKE="make V=1"
MAKE_OPTS="HOSTCC=clang HOSTCXX=clang++ HOSTLD=ld.lld CC=clang LD=ld.lld LLVM=1 LLVM_IAS=1"
MAKE_OPTS="$MAKE_OPTS PAHOLE=/opt/pahole/bin/pahole"
[ clean-up ]
$MAKE $MAKE_OPTS -C tools/ clean
[ bpftool ]
$MAKE $MAKE_OPTS -C tools/bpf/bpftool/
[ perf ]
PYTHON=python3 $MAKE $MAKE_OPTS -C tools/perf/
I was careful to respect the user's wish to override custom compiler, linker,
GNU/binutils and/or LLVM utils settings.
Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com> # tools/build and tools/perf
Link: https://lore.kernel.org/bpf/20210128015117.20515-1-sedat.dilek@gmail.com
Return 3 to indicate that the permission check for port 111
should be skipped.
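A hedged sketch of the kind of bind program this selftest uses (prog and
section names are assumptions): returning 3 tells the kernel to allow the
bind and to skip the privileged-port permission check for that port.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("cgroup/bind4")
  int bind_v4_prog(struct bpf_sock_addr *ctx)
  {
  	if (ctx->user_port == bpf_htons(111))
  		return 3;	/* allow and skip the permission check */
  	return 1;		/* allow, normal permission checks apply */
  }

  char _license[] SEC("license") = "GPL";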
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210127193140.3170382-2-sdf@google.com
Fix a bug in handling bpf_testmod unloading that would cause test_progs to exit
prematurely if bpf_testmod unloading failed. This is especially problematic
when running a subset of test_progs that doesn't require root permissions and
doesn't rely on bpf_testmod, yet would fail immediately due to exit(1) in
unload_bpf_testmod().
Fixes: 9f7fa22589 ("selftests/bpf: Add bpf_testmod kernel module for testing")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210126065019.1268027-1-andrii@kernel.org
Let us use a local variable in nsswitchthread(), so we can remove a
lot of casting for better readability.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-7-bjorn.topel@gmail.com
Instead of passing void * all over the place, let us pass the actual
type (ifobject) and remove the void-ptr-to-type-ptr casting.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-2-bjorn.topel@gmail.com
Add custom implementation of getsockopt hook for TCP_ZEROCOPY_RECEIVE.
We skip generic hooks for TCP_ZEROCOPY_RECEIVE and have a custom
call in do_tcp_getsockopt using the on-stack data. This removes
3% overhead for locking/unlocking the socket.
Without this patch:
3.38% 0.07% tcp_mmap [kernel.kallsyms] [k] __cgroup_bpf_run_filter_getsockopt
|
--3.30%--__cgroup_bpf_run_filter_getsockopt
|
--0.81%--__kmalloc
With the patch applied:
0.52% 0.12% tcp_mmap [kernel.kallsyms] [k] __cgroup_bpf_run_filter_getsockopt_kern
Note, exporting uapi/tcp.h requires removing netinet/tcp.h
from test_progs.h because those headers have conflicting
definitions.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210115163501.805133-2-sdf@google.com
The llvm patch https://reviews.llvm.org/D84002 permitted emitting an
empty rodata datasec if the elf .rodata section contains read-only data
from local variables. These local variables will not be emitted as
BTF_KIND_VARs since llvm converted them into static variables with
private linkage and without debuginfo types. Such an empty rodata
datasec makes skeleton code generation easy, since for the skeleton a
rodata struct will be generated if there is a .rodata elf section. The
existence of a rodata BTF datasec is also consistent with the existence
of a rodata map created by libbpf.
The BTF with such an empty rodata datasec will fail to load in the
kernel, though, as the kernel rejects a datasec with zero vlen and zero
size. For example, for the below code,
int sys_enter(void *ctx)
{
	int fmt[6] = {1, 2, 3, 4, 5, 6};
	int dst[6];

	bpf_probe_read(dst, sizeof(dst), fmt);
	return 0;
}
We got the below btf (bpftool btf dump ./test.o):
[1] PTR '(anon)' type_id=0
[2] FUNC_PROTO '(anon)' ret_type_id=3 vlen=1
	'ctx' type_id=1
[3] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
[4] FUNC 'sys_enter' type_id=2 linkage=global
[5] INT 'char' size=1 bits_offset=0 nr_bits=8 encoding=SIGNED
[6] ARRAY '(anon)' type_id=5 index_type_id=7 nr_elems=4
[7] INT '__ARRAY_SIZE_TYPE__' size=4 bits_offset=0 nr_bits=32 encoding=(none)
[8] VAR '_license' type_id=6, linkage=global-alloc
[9] DATASEC '.rodata' size=0 vlen=0
[10] DATASEC 'license' size=0 vlen=1
	type_id=8 offset=0 size=4
When loading ./test.o into the kernel with bpftool,
we see the following error:
libbpf: Error loading BTF: Invalid argument(22)
libbpf: magic: 0xeb9f
...
[6] ARRAY (anon) type_id=5 index_type_id=7 nr_elems=4
[7] INT __ARRAY_SIZE_TYPE__ size=4 bits_offset=0 nr_bits=32 encoding=(none)
[8] VAR _license type_id=6 linkage=1
[9] DATASEC .rodata size=24 vlen=0 vlen == 0
libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.
Basically, libbpf changed the .rodata datasec size to 24 since the elf
.rodata section size is 24. The kernel then rejected the BTF since vlen = 0.
Note that the above kernel verifier failure can be worked around by
changing the local variable "fmt" to a static or global, optionally const,
variable.
This patch permits a datasec with vlen = 0 in the kernel.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210119153519.3901963-1-yhs@fb.com
Reuse the module_attach infrastructure to add a new bare tracepoint and
check that we can attach to it as a raw tracepoint.
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210119122237.2426878-3-qais.yousef@arm.com
Three tests are added to the verifier's jit tests to trigger x64
jit jump padding.
The first test can be represented as the following assembly code:
1: bpf_call bpf_get_prandom_u32
2: if r0 == 1 goto pc+128
3: if r0 == 2 goto pc+128
...
129: if r0 == 128 goto pc+128
130: goto pc+128
131: goto pc+127
...
256: goto pc+2
257: goto pc+1
258: r0 = 1
259: ret
We first store a random number to r0 and add the corresponding
conditional jumps (2~129) to make the verifier believe that the jump
instructions from 130 to 257 are reachable. When the program is sent to
the x64 jit, it starts to optimize out the NOP jumps backwards from 257.
Since there are 128 such jumps, the program easily reaches 15 passes and
triggers jump padding.
Here is the x64 jit code of the first test:
0: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
5: 66 90 xchg ax,ax
7: 55 push rbp
8: 48 89 e5 mov rbp,rsp
b: e8 4c 90 75 e3 call 0xffffffffe375905c
10: 48 83 f8 01 cmp rax,0x1
14: 0f 84 fe 04 00 00 je 0x518
1a: 48 83 f8 02 cmp rax,0x2
1e: 0f 84 f9 04 00 00 je 0x51d
...
f6: 48 83 f8 18 cmp rax,0x18
fa: 0f 84 8b 04 00 00 je 0x58b
100: 48 83 f8 19 cmp rax,0x19
104: 0f 84 86 04 00 00 je 0x590
10a: 48 83 f8 1a cmp rax,0x1a
10e: 0f 84 81 04 00 00 je 0x595
...
500: 0f 84 83 01 00 00 je 0x689
506: 48 81 f8 80 00 00 00 cmp rax,0x80
50d: 0f 84 76 01 00 00 je 0x689
513: e9 71 01 00 00 jmp 0x689
518: e9 6c 01 00 00 jmp 0x689
...
5fe: e9 86 00 00 00 jmp 0x689
603: e9 81 00 00 00 jmp 0x689
608: 0f 1f 00 nop DWORD PTR [rax]
60b: eb 7c jmp 0x689
60d: eb 7a jmp 0x689
...
683: eb 04 jmp 0x689
685: eb 02 jmp 0x689
687: 66 90 xchg ax,ax
689: b8 01 00 00 00 mov eax,0x1
68e: c9 leave
68f: c3 ret
As expected, a 3-byte NOP is inserted at 608 due to the transition
from imm32 jmp to imm8 jmp. A 2-byte NOP is also inserted at 687 to
replace a NOP jump.
The second test case is tricky. Here is the assembly code:
1: bpf_call bpf_get_prandom_u32
2: if r0 == 1 goto pc+2048
3: if r0 == 2 goto pc+2048
...
2049: if r0 == 2048 goto pc+2048
2050: goto pc+2048
2051: goto pc+16
2052: goto pc+15
...
2064: goto pc+3
2065: goto pc+2
2066: goto pc+1
...
[repeat "goto pc+16".."goto pc+1" 127 times]
...
4099: r0 = 2
4100: ret
There are 4 major parts of the program.
1) 1~2049: Those are instructions to make 2050~4098 reachable. Some of
           them could also generate the padding for jmp_cond.
2) 2050: This is the target instruction for the imm32 nop jmp padding.
3) 2051~4098: The repeated "goto 1~16" instructions are designed to be
              consumed by the nop jmp optimization. In the end, those
              instructions become 128 consecutive 0-offset jmps and are
              optimized out in 1 pass, and this makes insn 2050 an imm32
              nop jmp in the next pass, so that we can trigger the
              5-byte padding.
4) 4099~4100: Those are the instructions to end the program.
The x64 jit code is like this:
0: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
5: 66 90 xchg ax,ax
7: 55 push rbp
8: 48 89 e5 mov rbp,rsp
b: e8 bc 7b d5 d3 call 0xffffffffd3d57bcc
10: 48 83 f8 01 cmp rax,0x1
14: 0f 84 7e 66 00 00 je 0x6698
1a: 48 83 f8 02 cmp rax,0x2
1e: 0f 84 74 66 00 00 je 0x6698
24: 48 83 f8 03 cmp rax,0x3
28: 0f 84 6a 66 00 00 je 0x6698
2e: 48 83 f8 04 cmp rax,0x4
32: 0f 84 60 66 00 00 je 0x6698
38: 48 83 f8 05 cmp rax,0x5
3c: 0f 84 56 66 00 00 je 0x6698
42: 48 83 f8 06 cmp rax,0x6
46: 0f 84 4c 66 00 00 je 0x6698
...
666c: 48 81 f8 fe 07 00 00 cmp rax,0x7fe
6673: 0f 1f 40 00 nop DWORD PTR [rax+0x0]
6677: 74 1f je 0x6698
6679: 48 81 f8 ff 07 00 00 cmp rax,0x7ff
6680: 0f 1f 40 00 nop DWORD PTR [rax+0x0]
6684: 74 12 je 0x6698
6686: 48 81 f8 00 08 00 00 cmp rax,0x800
668d: 0f 1f 40 00 nop DWORD PTR [rax+0x0]
6691: 74 05 je 0x6698
6693: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
6698: b8 02 00 00 00 mov eax,0x2
669d: c9 leave
669e: c3 ret
Since insns 2051~4098 are optimized out right before the padding pass,
several conditional jumps from the first part are replaced with imm8
jmp_cond, and this triggers the 4-byte padding, for example at 6673,
6680, and 668d. On the other hand, insn 2050 is replaced with the
5-byte nop at 6693.
The third test invokes the first and second tests as subprogs to test
bpf2bpf. Per the system log, one more jit run happened, taking only one
pass, and the same jit code was produced.
v4:
- Add the second test case which triggers jmp_cond padding and imm32 nop
jmp padding.
- Add the new test case as another subprog
Signed-off-by: Gary Lin <glin@suse.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210119102501.511-4-glin@suse.com
Currently, tests for bpf_get_ns_current_pid_tgid() are outside test_progs.
This change folds the test cases into test_progs.
Changes from v11:
- Fixed test failure not being detected.
- Removed the exit(3) call as it would stop test_progs execution.
Signed-off-by: Carlos Neira <cneirabustos@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210114141033.GA17348@localhost
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Conflicts:
drivers/net/can/dev.c
commit 03f16c5075 ("can: dev: can_restart: fix use after free bug")
commit 3e77f70e73 ("can: dev: move driver related infrastructure into separate subdir")
Code move.
drivers/net/dsa/b53/b53_common.c
commit 8e4052c32d ("net: dsa: b53: fix an off by one in checking "vlan->vid"")
commit b7a9e0da2d ("net: switchdev: remove vid_begin -> vid_end range from VLAN objects")
Field rename.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge tag 'net-5.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Networking fixes for 5.11-rc5, including fixes from bpf, wireless, and
can trees.
Current release - regressions:
- nfc: nci: fix the wrong NCI_CORE_INIT parameters
Current release - new code bugs:
- bpf: allow empty module BTFs
Previous releases - regressions:
- bpf: fix signed_{sub,add32}_overflows type handling
- tcp: do not mess with cloned skbs in tcp_add_backlog()
- bpf: prevent double bpf_prog_put call from bpf_tracing_prog_attach
- bpf: don't leak memory in bpf getsockopt when optlen == 0
- tcp: fix potential use-after-free due to double kfree()
- mac80211: fix encryption issues with WEP
- devlink: use right genl user_ptr when handling port param get/set
- ipv6: set multicast flag on the multicast route
- tcp: fix TCP_USER_TIMEOUT with zero window
Previous releases - always broken:
- bpf: local storage helpers should check nullness of owner ptr passed
- mac80211: fix incorrect strlen of .write in debugfs
- cls_flower: call nla_ok() before nla_next()
- skbuff: back tiny skbs with kmalloc() in __netdev_alloc_skb() too"
* tag 'net-5.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (52 commits)
net: systemport: free dev before on error path
net: usb: cdc_ncm: don't spew notifications
net: mscc: ocelot: Fix multicast to the CPU port
tcp: Fix potential use-after-free due to double kfree()
bpf: Fix signed_{sub,add32}_overflows type handling
can: peak_usb: fix use after free bugs
can: vxcan: vxcan_xmit: fix use after free bug
can: dev: can_restart: fix use after free bug
tcp: fix TCP socket rehash stats mis-accounting
net: dsa: b53: fix an off by one in checking "vlan->vid"
tcp: do not mess with cloned skbs in tcp_add_backlog()
selftests: net: fib_tests: remove duplicate log test
net: nfc: nci: fix the wrong NCI_CORE_INIT parameters
sh_eth: Fix power down vs. is_opened flag ordering
net: Disable NETIF_F_HW_TLS_RX when RXCSUM is disabled
netfilter: rpfilter: mask ecn bits before fib lookup
udp: mask TOS bits in udp_v4_early_demux()
xsk: Clear pool even for inactive queues
bpf: Fix helper bpf_map_peek_elem_proto pointing to wrong callback
sh_eth: Make PHY access aware of Runtime PM to fix reboot crash
...
The previous test added an address with a specified metric and checked if
the corresponding route was created. I somehow added two logs for the same
test. Remove the duplicated one.
Reported-by: Antoine Tenart <atenart@redhat.com>
Fixes: 0d29169a70 ("selftests/net/fib_tests: update addr_metric_test for peer route testing")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20210119025930.2810532-1-liuhangbin@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Daniel Borkmann says:
====================
pull-request: bpf-next 2021-01-16
1) Extend atomic operations to the BPF instruction set along with x86-64 JIT support,
that is, atomic{,64}_{xchg,cmpxchg,fetch_{add,and,or,xor}}, from Brendan Jackman.
2) Add support for using kernel module global variables (__ksym externs in BPF
programs) retrieved via module's BTF, from Andrii Nakryiko.
3) Generalize BPF stackmap's buildid retrieval and add support to have buildid
stored in mmap2 event for perf, from Jiri Olsa.
4) Various fixes for cross-building BPF selftests out-of-tree which then will
unblock wider automated testing on ARM hardware, from Jean-Philippe Brucker.
5) Allow to retrieve SOL_SOCKET opts from sock_addr progs, from Daniel Borkmann.
6) Clean up driver's XDP buffer init and split into two helpers to init per-
descriptor and non-changing fields during processing, from Lorenzo Bianconi.
7) Minor misc improvements to libbpf & bpftool, from Ian Rogers.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (41 commits)
perf: Add build id data in mmap2 event
bpf: Add size arg to build_id_parse function
bpf: Move stack_map_get_build_id into lib
bpf: Document new atomic instructions
bpf: Add tests for new BPF atomic operations
bpf: Add bitwise atomic instructions
bpf: Pull out a macro for interpreting atomic ALU operations
bpf: Add instructions for atomic_[cmp]xchg
bpf: Add BPF_FETCH field / create atomic_fetch_add instruction
bpf: Move BPF_STX reserved field check into BPF_STX verifier code
bpf: Rename BPF_XADD and prepare to encode other atomics in .imm
bpf: x86: Factor out a lookup table for some ALU opcodes
bpf: x86: Factor out emission of REX byte
bpf: x86: Factor out emission of ModR/M for *(reg + off)
tools/bpftool: Add -Wall when building BPF programs
bpf, libbpf: Avoid unused function warning on bpf_tail_call_static
selftests/bpf: Install btf_dump test cases
selftests/bpf: Fix installation of urandom_read
selftests/bpf: Move generated test files to $(TEST_GEN_FILES)
selftests/bpf: Fix out-of-tree build
...
====================
Link: https://lore.kernel.org/r/20210116012922.17823-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Daniel Borkmann says:
====================
pull-request: bpf 2021-01-16
1) Fix a double bpf_prog_put() for BPF_PROG_{TYPE_EXT,TYPE_TRACING} types in
link creation's error path causing a refcount underflow, from Jiri Olsa.
2) Fix BTF validation errors for the case where kernel modules don't declare
any new types and end up with an empty BTF, from Andrii Nakryiko.
3) Fix BPF local storage helpers to first check their {task,inode} owners for
being NULL before access, from KP Singh.
4) Fix a memory leak in BPF setsockopt handling for the case where optlen is
zero and thus temporary optval buffer should be freed, from Stanislav Fomichev.
5) Fix a syzbot memory allocation splat in BPF_PROG_TEST_RUN infra for
raw_tracepoint caused by too big ctx_size_in, from Song Liu.
6) Fix LLVM code generation issues with verifier where PTR_TO_MEM{,_OR_NULL}
registers were spilled to stack but not recognized, from Gilad Reti.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
MAINTAINERS: Update my email address
selftests/bpf: Add verifier test for PTR_TO_MEM spill
bpf: Support PTR_TO_MEM{,_OR_NULL} register spilling
bpf: Reject too big ctx_size_in for raw_tp test run
libbpf: Allow loading empty BTFs
bpf: Allow empty module BTFs
bpf: Don't leak memory in bpf getsockopt when optlen == 0
bpf: Update local storage test to check handling of null ptrs
bpf: Fix typo in bpf_inode_storage.c
bpf: Local storage helpers should check nullness of owner ptr passed
bpf: Prevent double bpf_prog_put call from bpf_tracing_prog_attach
====================
Link: https://lore.kernel.org/r/20210116002025.15706-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- Set the minimum GCC version to 5.1 for arm64 due to earlier compiler
bugs.
- Make atomic helpers __always_inline to avoid a section mismatch when
compiling with clang.
- Fix the CMA and crashkernel reservations to use ZONE_DMA (remove the
arm64_dma32_phys_limit variable, no longer needed with a dynamic
ZONE_DMA sizing in 5.11).
- Remove redundant IRQ flag tracing that was leaving lockdep
inconsistent with the hardware state.
- Revert perf events based hard lockup detector that was causing
smp_processor_id() to be called in preemptible context.
- Some trivial cleanups - spelling fix, renaming S_FRAME_SIZE to
PT_REGS_SIZE, function prototypes added.
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: selftests: Fix spelling of 'Mismatch'
arm64: syscall: include prototype for EL0 SVC functions
compiler.h: Raise minimum version of GCC to 5.1 for arm64
arm64: make atomic helpers __always_inline
arm64: rename S_FRAME_SIZE to PT_REGS_SIZE
Revert "arm64: Enable perf events based hard lockup detector"
arm64: entry: remove redundant IRQ flag tracing
arm64: Remove arm64_dma32_phys_limit and its uses
The prog_test that's added depends on Clang/LLVM features added by
Yonghong in commit 286daafd6512 (was https://reviews.llvm.org/D72184).
Note the use of a define called ENABLE_ATOMICS_TESTS: this is used
to:
- Avoid breaking the build for people on old versions of Clang
- Avoid needing separate lists of test objects for no_alu32, where
atomics are not supported even if Clang has the feature.
The atomics_test.o BPF object is built unconditionally both for
test_progs and test_progs-no_alu32. For test_progs, if Clang supports
atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper
test code. Otherwise, progs and global vars are defined anyway, as
stubs; this means that the skeleton user code still builds.
The atomics_test.o userspace object is built once and used for both
test_progs and test_progs-no_alu32. A variable called skip_tests is
defined in the BPF object's data section, which tells the userspace
object whether to skip the atomics test.
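A hedged sketch of the stub pattern described above (variable and prog names
are assumptions for illustration): the BPF object always defines the globals
and programs so the skeleton builds, but only performs atomics when Clang
support was detected.

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  __u64 add64_value = 1;

  #ifdef ENABLE_ATOMICS_TESTS
  bool skip_tests __attribute((__section__(".data"))) = false;
  #else
  bool skip_tests = true;
  #endif

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(add, int x)
  {
  #ifdef ENABLE_ATOMICS_TESTS
  	__sync_fetch_and_add(&add64_value, 2);
  #endif
  	return 0;
  }

  char _license[] SEC("license") = "GPL";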
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210114181751.768687-11-jackmanb@google.com
A subsequent patch will add additional atomic operations. These new
operations will use the same opcode field as the existing XADD, with
the immediate discriminating different operations.
In preparation, rename the instruction mode to BPF_ATOMIC and start
calling the zero immediate BPF_ADD.
This is possible (doesn't break existing valid BPF progs) because the
immediate field is currently reserved MBZ and BPF_ADD is zero.
All uses are removed from the tree but the BPF_XADD definition is
kept around to avoid breaking builds for people including kernel
headers.
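As a sketch of the resulting encoding (helper shape assumed, modeled on the
existing BPF_* insn macros): the opcode keeps BPF_STX plus the BPF_ATOMIC
mode, and the .imm field selects the operation, with BPF_ADD == 0 preserving
the old XADD semantics.

  #define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF)		\
  	((struct bpf_insn) {					\
  		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
  		.dst_reg = DST,					\
  		.src_reg = SRC,					\
  		.off   = OFF,					\
  		.imm   = OP })

  /* *(u64 *)(dst_reg + off) += src_reg, i.e. the old BPF_XADD: */
  /* BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_1, BPF_REG_2, 0) */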
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Link: https://lore.kernel.org/bpf/20210114181751.768687-5-jackmanb@google.com
Add a separate option to nettest to specify local address
binding in client mode.
Signed-off-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>