Xin Long says:
====================
There are 4 events defined in rfc5061 that are missing in Linux SCTP:
SCTP_ADDR_ADDED, SCTP_ADDR_REMOVED, SCTP_ADDR_MADE_PRIM and
SCTP_SEND_FAILED_EVENT.
This patchset adds them.
====================
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch adds a new event, SCTP_SEND_FAILED_EVENT, described in
rfc6458#section-6.1.11. It is an update of the SCTP_SEND_FAILED event:
struct sctp_sndrcvinfo ssf_info is replaced with
struct sctp_sndinfo ssfe_info in struct sctp_send_failed_event.
SCTP_SEND_FAILED is being deprecated, but it is not removed in this
patch. Both are processed in sctp_datamsg_destroy() when the
corresponding event flag is set.
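For reference, the notification layout described in rfc6458#section-6.1.11
is (types as in the RFC; the kernel uapi header uses the __u* equivalents):

    struct sctp_send_failed_event {
        uint16_t ssfe_type;
        uint16_t ssfe_flags;
        uint32_t ssfe_length;
        uint32_t ssfe_error;
        struct sctp_sndinfo ssfe_info;
        sctp_assoc_t ssfe_assoc_id;
        uint8_t  ssfe_data[];
    };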
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
sctp_ulpevent_nofity_peer_addr_change() is called in
sctp_assoc_set_primary() to send an SCTP_ADDR_MADE_PRIM event
when the transport is set as the primary path of the asoc.
This event is described in rfc6458#section-6.1.2:
SCTP_ADDR_MADE_PRIM: This address has now been made the primary
destination address. This notification is provided whenever an
address is made primary.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
sctp_ulpevent_nofity_peer_addr_change() is called in
sctp_assoc_rm_peer() to send an SCTP_ADDR_REMOVED event
when the transport is removed from the asoc.
This event is described in rfc6458#section-6.1.2:
SCTP_ADDR_REMOVED: The address is no longer part of the
association.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch extracts a helper, sctp_ulpevent_nofity_peer_addr_change(),
to create a peer_addr_change event and enqueue it. The helper is
called in sctp_assoc_add_peer() to send an SCTP_ADDR_ADDED event.
This event is described in rfc6458#section-6.1.2:
SCTP_ADDR_ADDED: The address is now part of the association.
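A minimal sketch of what the helper can look like, assuming the usual
make-event/enqueue pattern in net/sctp/ulpevent.c (the signature and body
in the actual patch may differ):

    static void sctp_ulpevent_nofity_peer_addr_change(struct sctp_transport *transport,
                                                      int state, int error)
    {
        struct sctp_association *asoc = transport->asoc;
        struct sockaddr_storage addr;
        struct sctp_ulpevent *event;

        memset(&addr, 0, sizeof(addr));
        memcpy(&addr, &transport->ipaddr, transport->af_specific->sockaddr_len);

        /* build the peer_addr_change notification and queue it to the ulpq */
        event = sctp_ulpevent_make_peer_addr_change(asoc, &addr, 0, state,
                                                    error, GFP_ATOMIC);
        if (event)
            asoc->stream.si->enqueue_event(&asoc->ulpq, event);
    }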
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
'struct xdp_umem_reg' has 4 bytes of padding at the end that makes
valgrind complain about passing uninitialized stack memory to the
syscall:
Syscall param socketcall.setsockopt() points to uninitialised byte(s)
at 0x4E7AB7E: setsockopt (in /usr/lib64/libc-2.29.so)
by 0x4BDE035: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:172)
Uninitialised value was created by a stack allocation
at 0x4BDDEBA: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:140)
The padding bytes appeared after the introduction of the new 'flags'
field. A memset() is required to clear them.
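A sketch of the fix in xsk_umem__create(), assuming the existing variable
names in xsk.c (the point is simply to zero the whole struct, padding
included, before filling it in):

    struct xdp_umem_reg mr;

    memset(&mr, 0, sizeof(mr));  /* also clears the trailing padding bytes */
    mr.addr = (uintptr_t)umem_area;
    mr.len = size;
    mr.chunk_size = umem->config.frame_size;
    mr.headroom = umem->config.frame_headroom;
    mr.flags = umem->config.flags;

    err = setsockopt(umem->fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));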
Fixes: 10d30e3017 ("libbpf: add flags to umem config")
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191009164929.17242-1-i.maximets@ovn.org
Andrii Nakryiko says:
====================
Fix the BTF-to-C logic for handling padding at the end of a struct, and fix
the existing test that should have captured this. Also move test_btf_dump
into a test_progs test to leverage the common infrastructure.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
The existing btf_dump padding test case was supposed to cover padding
generation at the end of a struct, but its expected output was specified
incorrectly. Fix this.
Fixes: 2d2a3ad872 ("selftests/bpf: add btf_dump BTF-to-C conversion tests")
Reported-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191008231009.2991130-4-andriin@fb.com
Fix a case where explicit padding at the end of a struct is necessary
due to non-standard alignment requirements of fields (which BTF doesn't
capture explicitly).
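A hypothetical illustration of the problem class (not the exact struct from
the test): the aligned attribute below is not recorded in BTF, so a dumper
relying on member types alone would produce an 8-byte struct, while BTF
records a 16-byte size; explicit tail padding has to be emitted to match:

    /* original source: sizeof == 16 because of the alignment attribute */
    struct end_padded {
        int a;
        int b;
    } __attribute__((aligned(16)));

    /* regenerated C must preserve the recorded size, e.g. via anonymous
     * bit-field padding at the end */
    struct end_padded {
        int a;
        int b;
        long: 64;
    };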
Fixes: 351131b51c ("libbpf: add btf_dump API for BTF-to-C conversion")
Reported-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191008231009.2991130-2-andriin@fb.com
All the registers and functionality used in the callback
dwmac5_flex_pps_config() are common to dwmac 4.10a [1] and
5.00a [2].
Reuse the same callback for dwmac 4.10a too.
Tested on STM32MP15x, based on dwmac 4.10a.
[1] DWC Ethernet QoS Databook 4.10a October 2014
[2] DWC Ethernet QoS Databook 5.00a September 2017
Signed-off-by: Antonio Borneo <antonio.borneo@st.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Merge tag 'spi-ptp-api' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull in a dependency for Vladimir's work on more precise
packet time stamping.
Mark Brown says:
====================
spi: Add a PTP API
For detailed timestamping of operations.
====================
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This reverts commit 0ad646c81b.
As noticed by Jakub, this is no longer needed after
commit 11fc7d5a0a ("tun: fix memory leak in error path").
dev_get_valid_name() is no longer exported for the exclusive
use of the tun driver.
Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
__tipc_nl_compat_dumpit() calls tipc_nl_publ_dump(), which expects
the attrs to be available via genl_dumpit_info(cb)->attrs. Add an info
struct and attr parsing to the compat dumpit function.
Reported-by: syzbot+8d37c50ffb0f52941a5e@syzkaller.appspotmail.com
Fixes: 057af70713 ("net: tipc: have genetlink code to parse the attrs during dumpit")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Huazhong Tan says:
====================
This patch-set includes some new features for the HNS3 ethernet
controller driver.
[patch 01/06] adds support for configuring VF link status on the host.
[patch 02/06] adds support for configuring VF spoof check.
[patch 03/06] adds support for configuring VF trust.
[patch 04/06] adds support for configuring VF bandwidth on the host.
[patch 05/06] adds support for configuring VF MAC on the host.
[patch 06/06] adds support for tx-scatter-gather-fraglist.
====================
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
The hardware supports up to 8 TX BDs for a non-TSO skb and up to
63 TX BDs for a TSO skb. Currently, the hns3 driver supports RX skbs
with a fraglist when HW GRO is enabled; when the stack forwards such
an RX skb, it needs to linearize the skb before sending it to an
interface without TX fraglist support.
This patch adds support for TX fraglist. The performance increases
from 1 GByte to 1.5 GByte for one iperf TCP stream in the forwarding
test after this patch. BTW, the minimum BD number of the ring should
be updated to 72 to support TX fraglist.
This patch also changes the error handling of some functions called
by hns3_fill_desc() so that they return the BD number when there is
no error, and renames some macros to more meaningful names.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch adds support for configuring the VF MAC address from the
host for the HNS3 driver.
The 'init' parameter of hns3_init_mac_addr() is now unnecessary,
since the MAC address is no longer read from NCL_CONFIG when doing
a reset, so it is removed; otherwise it would affect the VF's MAC
address initialization.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch adds support for configuring the bandwidth of a VF on the
host for the HNS3 driver.
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch adds support for setting VF trust from the host. If the
specified VF is trusted, it can enable promiscuous mode (including
allmulti mode). If a trusted VF has enabled promiscuous mode and
then becomes untrusted, the host will disable promiscuous mode for
that VF.
Since the VF now updates its promiscuous mode from set_rx_mode, it
is no longer necessary to enable broadcast promiscuous mode during
initialization or reset.
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch adds support for spoof check configuration for VFs.
When it is enabled, "spoof checking" is done for both the MAC address
and VLAN. For each VF, the HW ensures that the source MAC address
(or VLAN) of every outgoing packet exists in the MAC-list (or
VLAN-list) configured for RX filtering for that VF. If not,
the packet is dropped.
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
This patch adds support for configuring VF link properties.
The options are auto, enable, and disable. Even if the PF
is down, communication between VFs will still work
if the VFs are set to enable. The commands are as follows:
'ip link set <pf> vf <vf_id> state <auto|enable|disable>'
    changes the VF link state
'ip link show'
    shows the current setting
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Andrii Nakryiko says:
====================
This patch set makes bpf_helpers.h and bpf_endian.h a part of libbpf itself
for consumption by user BPF programs, not just selftests. It also splits off
tracing helpers into bpf_tracing.h, which also becomes part of libbpf. Some of
the legacy stuff (BPF_ANNOTATE_KV_PAIR, load_{byte,half,word}, bpf_map_def
with unsupported fields, etc.) is extracted into selftests-only bpf_legacy.h.
All the selftests and samples are switched to use libbpf's headers and
selftests' ones are removed.
As part of this patch set we also add variadic BPF_CORE_READ macros, which
simplify BPF CO-RE reads, especially ones that have to follow a few
pointers. E.g., what in the non-BPF world (and when using BCC) would be:
int x = s->a->b.c->d; /* s, a, and b.c are pointers */
Today would have to be written using explicit bpf_probe_read() calls as:
void *t;
int x;
bpf_probe_read(&t, sizeof(t), s->a);
bpf_probe_read(&t, sizeof(t), ((struct b *)t)->b.c);
bpf_probe_read(&x, sizeof(x), ((struct c *)t)->d);
This is super inconvenient and distracts from program logic a lot. Now, with
added BPF_CORE_READ() macros, you can write the above as:
int x = BPF_CORE_READ(s, a, b.c, d);
Up to 9 levels of pointer chasing are supported, which should be enough for
any practical purpose, hopefully, without adding too many boilerplate macro
definitions (though there are admittedly some, given how variadic and
recursive C macros have to be implemented).
There is also a BPF_CORE_READ_INTO() variant, which relies on the caller to
allocate space for the result:
int x;
BPF_CORE_READ_INTO(&x, s, a, b.c, d);
The result of the last bpf_probe_read() call in the chain is the result of
BPF_CORE_READ_INTO(). If any intermediate bpf_probe_read() call fails, then
all the subsequent ones will fail too, so this is sufficient to know whether
the overall "operation" succeeded or not. No short-circuiting of
bpf_probe_read()s is done, though.
BPF_CORE_READ_STR_INTO() is added as well, which differs from
BPF_CORE_READ_INTO() only in that the last bpf_probe_read() call (to read
the final field after chasing pointers) is replaced with
bpf_probe_read_str(). The result of bpf_probe_read_str() is returned as the
result of the BPF_CORE_READ_STR_INTO() macro itself, so that applications
can track the return code and/or the length of the read string.
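For example, a usage sketch (field names assume the usual task_struct
layout):

    char comm[16];
    int len;

    /* len is the return value of the final bpf_probe_read_str() */
    len = BPF_CORE_READ_STR_INTO(&comm, task, group_leader, comm);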
Patch set outline:
- patch #1 undoes previously added GCC-specific bpf-helpers.h include;
- patch #2 splits off legacy stuff we don't want to carry over;
- patch #3 adjusts CO-RE reloc tests to avoid subsequent naming conflict with
BPF_CORE_READ;
- patch #4 splits off bpf_tracing.h;
- patch #5 moves bpf_{helpers,endian,tracing}.h and bpf_helper_defs.h
generation into libbpf and adjusts Makefiles to include libbpf for header
search;
- patch #6 adds variadic BPF_CORE_READ() macro family, as described above;
- patch #7 adds tests to verify all possible levels of pointer nestedness for
BPF_CORE_READ(), as well as a correctness test for BPF_CORE_READ_STR_INTO().
v4->v5:
- move BPF_CORE_READ() stuff into bpf_core_read.h header (Alexei);
v3->v4:
- rebase on latest bpf-next master;
- bpf_helper_defs.h generation is moved into libbpf's Makefile;
v2->v3:
- small formatting fixes and macro () fixes (Song);
v1->v2:
- fix CO-RE reloc tests before bpf_helpers.h move (Song);
- split off legacy stuff we don't want to carry over (Daniel, Toke);
- split off bpf_tracing.h (Daniel);
- fix samples/bpf build (assuming other fixes are applied);
- switch remaining maps either to bpf_map_def_legacy or BTF-defined maps;
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Validate BPF_CORE_READ correctness and handling of up to 9 levels of
nestedness using cyclic task->(group_leader->)*->tgid chains.
Also add a test of the maximum-depth BPF_CORE_READ_STR_INTO() macro.
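A sketch of the idea: group_leader of a group leader points back to the
same task, so arbitrarily deep chains still resolve to the same tgid, e.g.:

    pid_t tgid = BPF_CORE_READ(task, group_leader, group_leader,
                               group_leader, group_leader, tgid);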
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-8-andriin@fb.com
Add a few macros simplifying BCC-like multi-level probe reads, while also
emitting CO-RE relocations for each read.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-7-andriin@fb.com
Move bpf_helpers.h, bpf_tracing.h, and bpf_endian.h into libbpf. Move
bpf_helper_defs.h generation into libbpf's Makefile. Ensure all those
headers are installed alongside the other libbpf headers. Also, adjust
the selftests' and samples' include paths to pick these headers up from
libbpf now.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-6-andriin@fb.com
Split off the PT_REGS-related helpers into a bpf_tracing.h header. Adjust
selftests and samples to include it where necessary.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-5-andriin@fb.com
To allow adding a variadic BPF_CORE_READ macro with slightly different
syntax and semantics, define CORE_READ in the CO-RE reloc tests as
a thin wrapper around the low-level bpf_core_read() macro, which in turn
is just a wrapper around bpf_probe_read().
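A sketch of the wrapper, assuming bpf_core_read() keeps the
(dst, size, src) argument order:

    #define CORE_READ(dst, src) bpf_core_read(dst, sizeof(*(dst)), src)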
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-4-andriin@fb.com
Split off a few legacy things from bpf_helpers.h into a separate
bpf_legacy.h file:
- load_{byte|half|word};
- remove extra inner_idx and numa_node fields from bpf_map_def and
introduce bpf_map_def_legacy for use in samples;
- move BPF_ANNOTATE_KV_PAIR into bpf_legacy.h.
Adjust samples and selftests accordingly by either including
bpf_legacy.h and using bpf_map_def_legacy, or switching to BTF-defined
maps altogether.
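For illustration, a BTF-defined map in the style the samples/selftests are
switched to (key/value types here are just an example):

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 16);
        __type(key, __u32);
        __type(value, __u64);
    } my_map SEC(".maps");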
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-3-andriin@fb.com
Having GCC provide its own bpf-helper.h is not the right approach and is
going to be changed. Undo bpf_helpers.h change before moving
bpf_helpers.h into libbpf.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-2-andriin@fb.com
There is a spelling mistake in a NL_SET_ERR_MSG_MOD message. Fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Don't populate const arrays on the stack but instead make them
static. Makes the object code smaller by 1058 bytes.
Before:
text data bss dec hex filename
29879 6144 0 36023 8cb7 drivers/net/phy/mscc.o
After:
text data bss dec hex filename
28437 6528 0 34965 8895 drivers/net/phy/mscc.o
(gcc version 9.2.1, amd64)
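A generic illustration of the change (hypothetical array, not the driver's
actual data); the compiler no longer has to emit instructions that rebuild
the array on the stack on every call:

    -    const u8 lookup_tbl[4] = { 0x01, 0x02, 0x04, 0x08 };
    +    static const u8 lookup_tbl[4] = { 0x01, 0x02, 0x04, 0x08 };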
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Don't populate the array exp_mask on the stack but instead make it
static. Makes the object code smaller by 224 bytes.
Before:
text data bss dec hex filename
77832 2290 0 80122 138fa ethernet/netronome/nfp/bpf/jit.o
After:
text data bss dec hex filename
77544 2354 0 79898 1381a ethernet/netronome/nfp/bpf/jit.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
SPI is one of the interfaces used to access devices which have a POSIX
clock driver (real time clocks, 1588 timers etc). The main problem is not
that the SPI bus is slow, but that drivers don't take a constant amount of
time to transfer data over SPI. When there is a high delay in the readout
of time, there will be
uncertainty in the value that has been read out of the peripheral.
When that delay is constant, the uncertainty can at least be
approximated with a certain accuracy which is fine more often than not.
Timing jitter occurs all over in the kernel code, and is mainly caused
by having to let go of the CPU for various reasons such as preemption,
servicing interrupts, going to sleep, etc. Another major reason is CPU
dynamic frequency scaling.
It turns out that the problem of retrieving time from a SPI peripheral
with high accuracy can be solved by the use of "PTP system
timestamping" - a mechanism to correlate the time when the device has
snapshotted its internal time counter with the Linux system time at that
same moment. This is sufficient for having a precise time measurement -
it is not necessary for the whole SPI transfer to be transmitted "as
fast as possible", or "as low-jitter as possible". The system has to be
low-jitter for a very short amount of time to be effective.
This patch introduces a PTP system timestamping mechanism in struct
spi_transfer. This is to be used by SPI device drivers when they need to
know the exact time at which the underlying device's time was
snapshotted. More often than not, SPI peripherals have a very exact
timing for when their SPI-to-interconnect bridge issues a transaction
for snapshotting and reading the time register, and that will be
dependent on when the SPI-to-interconnect bridge figures out that this
is what it should do, aka as soon as it sees byte N of the SPI transfer.
Since spi_device drivers are the ones who'd know best how the peripheral
behaves in this regard, expose a mechanism in spi_transfer which allows
them to specify which word (or word range) from the transfer should be
timestamped.
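A sketch of how a peripheral driver might request this, assuming the
per-transfer fields introduced here are named ptp_sts_word_pre,
ptp_sts_word_post and ptp_sts (check the final header for the exact names):

    struct ptp_system_timestamp ptp_sts;
    struct spi_transfer xfer = {
        .tx_buf = buf,
        .rx_buf = buf,
        .len = sizeof(buf),
        .ptp_sts_word_pre = 2,   /* snapshot system time before word 2 is sent */
        .ptp_sts_word_post = 2,  /* ...and again right after it */
        .ptp_sts = &ptp_sts,
    };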
Add a default implementation of the PTP system timestamping in the SPI
core. This is not going to be satisfactory performance-wise, but should
at least increase the likelihood that SPI device drivers will use PTP
system timestamping in the future.
There are 3 entry points from the core towards the SPI controller
drivers:
- transfer_one: The driver is passed individual spi_transfers to
execute. This is the easiest to timestamp.
- transfer_one_message: The core passes the driver an entire spi_message
(a potential batch of spi_transfers). The core puts the same pre and
post timestamp to all transfers within a message. This is not ideal,
but nothing better can be done by default anyway, since the core has
no insight into how the driver batches the transfers.
- transfer: Like transfer_one_message, but for unqueued drivers (i.e.
the driver implements its own queue scheduling).
Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Link: https://lore.kernel.org/r/20190905010114.26718-3-olteanv@gmail.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently, in xdp_adjust_tail_kern.c, MAX_PCKT_SIZE is limited
to 600. To make this size flexible, a static global variable
'max_pcktsz' is added.
By updating the new packet size from user space, xdp_adjust_tail_kern.o
will use this value as the new maximum packet size.
This static global variable is accessible from the .data section with
bpf_object__find_map* from user space, since it is considered an
internal map (accessible via the .bss/.data/.rodata suffix).
If no '-P <MAX_PCKT_SIZE>' option is used, the maximum packet size
defaults to 600.
For clarity, change the helper used to fetch the map from 'bpf_map__next'
to 'bpf_object__find_map_fd_by_name'. Also, change the way prog_fd and
map_fd are tested from '!= 0' to '< 0', since an fd could be 0
when stdin is closed.
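A sketch of the user-space update, with the internal map name given here
("xdp_adju.data") only as an assumption based on libbpf's naming of internal
maps, and assuming 'max_pcktsz' is the only variable in .data so the map
value is just that u32:

    int data_fd = bpf_object__find_map_fd_by_name(obj, "xdp_adju.data");
    __u32 key = 0, new_size = 1500;

    /* overwrite the program's .data copy of max_pcktsz */
    bpf_map_update_elem(data_fd, &key, &new_size, BPF_ANY);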
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191007172117.3916-1-danieltimlee@gmail.com
Stanislav Fomichev says:
====================
While having per-net-ns flow dissector programs is convenient for
testing, security-wise it's better to have only one vetted global
flow dissector implementation.
Let's have a convention that when BPF flow dissector is installed
in the root namespace, child namespaces can't override it.
The intended use-case is to attach global BPF flow dissector
early from the init scripts/systemd. Attaching global dissector
is prohibited if some non-root namespace already has flow dissector
attached. Also, attaching to non-root namespace is prohibited
when there is flow dissector attached to the root namespace.
v3:
* drop extra check and empty line (Andrii Nakryiko)
v2:
* EPERM -> EEXIST (Song Liu)
* Make sure we don't have dissector attached to non-root namespaces
when attaching the global one (Andrii Nakryiko)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Make sure non-root namespaces get an error if root flow dissector is
attached.
Cc: Petar Penkov <ppenkov@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Always use the init_net flow dissector BPF program if it's attached, and
fall back to the per-net-namespace one otherwise. Also, deny installing new
programs if there is already one attached to the root namespace.
Users can still detach their BPF programs, but can't attach any
new ones (-EEXIST).
Cc: Petar Penkov <ppenkov@google.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
In libbpf commit 5e61f27070 ("libbpf: stop enforcing kern_version,
populate it for users"), the non-LIBBPF_API __bpf_object__open_xattr() API
was removed from the libbpf.h header. This broke bpftool, which relied on
that function. This patch fixes the build by switching to the newly added
bpf_object__open_file(), which provides the same capabilities but is an
official and future-proof API.
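A sketch of the replacement call (opts left as NULL here for brevity;
bpftool passes its own open options):

    struct bpf_object *obj;

    obj = bpf_object__open_file(file, NULL);
    if (libbpf_get_error(obj)) {
        p_err("failed to open object file");
        return -1;
    }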
v1->v2:
- fix prog_type shadowing (Stanislav).
Fixes: 5e61f27070 ("libbpf: stop enforcing kern_version, populate it for users")
Reported-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20191007225604.2006146-1-andriin@fb.com
The current Makefile dependency chain is not strict enough and allows
test_attach_probe.o to be built before test_progs's
prog_tests/attach_probe.o, which leads to the assembler complaining
about a missing included binary.
This patch is a minimal fix that enforces that
test_attach_probe.o (the BPF object file) is built before
prog_tests/attach_probe.c is compiled.
Fixes: 928ca75e59 ("selftests/bpf: switch tests to new bpf_object__open_{file, mem}() APIs")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191007204149.1575990-1-andriin@fb.com
Ioana Ciornei says:
====================
dpaa2-eth: misc cleanup
This patch set consists of some cleanup patches ranging from removing dead
code to fixing a minor issue in ethtool stats. Also, unbounded while loops
are removed from the driver by adding a maximum number of retries for DPIO
portal commands.
Changes in v2:
- return -ETIMEDOUT where possible if the number of retries is hit
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Throughout the driver there are several places where we wait
indefinitely for DPIO portal commands to be executed, while
the portal returns a busy response code.
Even though in theory we are guaranteed the portals become
available eventually, in practice the QBMan hardware module
may become unresponsive in various corner cases.
Make sure we can never get stuck in an infinite while loop
by adding a retry counter for all portal commands.
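A sketch of the bounded-retry pattern (names are hypothetical, not the
driver's actual helpers): retry a busy portal command a fixed number of
times and report -ETIMEDOUT if the limit is hit:

    #define DPAA2_ETH_SWP_BUSY_RETRIES 1000  /* hypothetical limit */

    int retries = 0;
    int err;

    do {
        err = portal_enqueue(priv, fd);  /* hypothetical portal command */
    } while (err == -EBUSY && ++retries < DPAA2_ETH_SWP_BUSY_RETRIES);

    if (err == -EBUSY)
        err = -ETIMEDOUT;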
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't print error message for a successful return value.
Fixes: d84c3a4ded ("dpaa2-eth: Add new DPNI statistics counters")
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove one function call whose result was not used anywhere.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't populate the array tick_array on the stack but instead make it
static. Makes the object code smaller by 29 bytes.
Before:
text data bss dec hex filename
19191 432 0 19623 4ca7 hisilicon/hns3/hns3pf/hclge_tm.o
After:
text data bss dec hex filename
19098 496 0 19594 4c8a hisilicon/hns3/hns3pf/hclge_tm.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't populate the arrays port_map and sl_map on the stack but
instead make them static. Makes the object code smaller by 64 bytes.
Before:
text data bss dec hex filename
49575 6872 64 56511 dcbf hisilicon/hns/hns_dsaf_main.o
After:
text data bss dec hex filename
49350 7032 64 56446 dc7e hisilicon/hns/hns_dsaf_main.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski says:
====================
net/tls: minor micro optimizations
This set brings a number of minor code changes from my tree which
don't have a noticeable impact on performance but seem reasonable
nonetheless.
First, the sk_msg_sg copy array is converted to a bitmap; zeroing that
structure takes a lot of time, hence we should try to keep it
small.
Next, two conditions are marked as unlikely; GCC seemed to have
a little trouble correctly reasoning about those.
Patch 4 adds parameters to tls_device_decrypted() to avoid
walking structures, as all callers already have the relevant
pointers.
Lastly, two boolean members of the TLS context structures are
converted to a bitfield.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Use a single bit instead of a boolean to remember whether a packet
was already decrypted.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>