Commit Graph

Phil Sutter
8daa8fde3f netfilter: nf_tables: Introduce NFT_MSG_GETRULE_RESET
Analogous to NFT_MSG_GETOBJ_RESET, but for rules: reset stateful
expressions like counters or quotas. The latter two are the only
consumers; adjust their 'dump' callbacks to respect the parameter
introduced earlier.
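
A minimal sketch of how a consumer's dump callback might honour the
reset parameter (simplified; assumes the nft_counter helper names):

	static int nft_counter_dump(struct sk_buff *skb,
				    const struct nft_expr *expr, bool reset)
	{
		struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);

		/* reset == true clears the counter after dumping it */
		return nft_counter_do_dump(skb, priv, reset);
	}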

Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2022-11-15 10:53:17 +01:00
Phil Sutter
7d34aa3e03 netfilter: nf_tables: Extend nft_expr_ops::dump callback parameters
Add a 'reset' flag just like with nft_object_ops::dump. This will be
useful to reset "anonymous stateful objects", e.g. simple rule counters.

No functional change intended.
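
For reference, a sketch of the extended callback signature (other
nft_expr_ops members elided):

	struct nft_expr_ops {
		/* ... */
		int (*dump)(struct sk_buff *skb,
			    const struct nft_expr *expr,
			    bool reset);
		/* ... */
	};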

Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2022-11-15 10:46:34 +01:00
Peng Wu
7394c2dd62 netfilter: nft_inner: fix return value check in nft_inner_parse_l2l3()
In nft_inner_parse_l2l3(), the pointer to check in the
htons(ETH_P_8021Q) case is 'veth', the return value of
skb_header_pointer(), not 'eth'. Fix the check.
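
Roughly, the corrected check looks like this (a sketch, not the
verbatim patch):

	veth = skb_header_pointer(skb, off, sizeof(_veth), &_veth);
	if (!veth)	/* previously the 'eth' pointer was tested by mistake */
		return -1;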

Fixes: 3a07327d10 ("netfilter: nft_inner: support for inner tunnel header matching")
Signed-off-by: Peng Wu <wupeng58@huawei.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2022-11-01 12:11:01 +01:00
Pablo Neira Ayuso
66394126bf netfilter: nft_payload: use __be16 to store gre version
GRE_VERSION and GRE_VERSION0 are expressed in network byte order,
so use __be16. Uncovered by sparse:

net/netfilter/nft_payload.c:112:25: warning: incorrect type in assignment (different base types)
net/netfilter/nft_payload.c:112:25:    expected unsigned int [usertype] version
net/netfilter/nft_payload.c:112:25:    got restricted __be16
net/netfilter/nft_payload.c:114:22: warning: restricted __be16 degrades to integer
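
A sketch of the fix: since GRE_VERSION is a network-byte-order mask,
the masked value must be stored in a __be16 (constant names as used in
the message above):

	__be16 version;

	version = gre_hdr->flags & GRE_VERSION;
	if (version != GRE_VERSION0)
		return -1;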

Fixes: c247897d7c ("netfilter: nft_payload: access GRE payload via inner offset")
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2022-11-01 12:11:00 +01:00
Jakub Kicinski
6f1a298b2e Merge branch 'inet-add-drop-monitor-support'
Eric Dumazet says:

====================
inet: add drop monitor support

I recently tried to analyse flakes in ip_defrag selftest.
This failed miserably.

IPv4 and IPv6 reassembly units are causing false kfree_skb()
notifications. It is time to deal with this issue.

The first two patches change core networking to better
deal with possible skb frag_list chains with respect
to kfree_skb/consume_skb status.

The last three patches add three new drop reasons
and make sure skbs that have been reassembled into
a large datagram are no longer viewed as dropped ones.

After this, understanding why ip_defrag selftest is flaky
is possible using standard drop monitoring tools.
====================

Link: https://lore.kernel.org/r/20221029154520.2747444-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:14:30 -07:00
Eric Dumazet
3bdfb04f13 net: dropreason: add SKB_DROP_REASON_FRAG_TOO_FAR
IPv4 reassembly unit can decide to drop frags based on
/proc/sys/net/ipv4/ipfrag_max_dist sysctl.

Add a dedicated drop reason to track this specific
and weird case.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:14:27 -07:00
Eric Dumazet
77adfd3a1d net: dropreason: add SKB_DROP_REASON_FRAG_REASM_TIMEOUT
Used to track skbs freed after a timeout happened
in a reassembly unit.

Passing a @reason argument to inet_frag_rbtree_purge()
allows using the correct consumed status for frags
that have been successfully reassembled.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:14:27 -07:00
Eric Dumazet
4ecbb1c27c net: dropreason: add SKB_DROP_REASON_DUP_FRAG
This is used to track when a duplicate segment received by various
reassembly units is dropped.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:14:26 -07:00
Eric Dumazet
511a3eda2f net: dropreason: propagate drop_reason to skb_release_data()
When an skb with a frag list is consumed, we currently
pretend all skbs in the frag list were dropped.

In order to fix this, add a @reason argument to skb_release_data()
and skb_release_all().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:14:26 -07:00
Eric Dumazet
0e84afe8eb net: dropreason: add SKB_CONSUMED reason
This will allow us, in the future, to simply use:

	kfree_skb_reason(skb, reason);

Instead of repeating sequences like:

	if (dropped)
	    kfree_skb_reason(skb, reason);
	else
	    consume_skb(skb);

For instance, the following patch in the series adds
@reason to skb_release_data() and skb_release_all(),
so that we can propagate a meaningful @reason whenever
consume_skb()/kfree_skb() has to take care of a potential frag_list.
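
The free path can then dispatch on the reason, along these lines
(a sketch per this series; details may differ):

	if (reason == SKB_CONSUMED)
		trace_consume_skb(skb);	/* not a drop */
	else
		trace_kfree_skb(skb, __builtin_return_address(0), reason);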

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:14:26 -07:00
Florian Fainelli
b98deb2f98 net: systemport: Add support for RDMA overflow statistic counter
RDMA overflows can happen if the Ethernet controller does not have
enough bandwidth allocated at the memory controller level. Report RDMA
overflows and deal with saturation, similar to the RBUF overflow
counter.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20221028222141.3208429-1-f.fainelli@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:05:03 -07:00
Sergei Antonov
37c8489012 net: ftmac100: allow increasing MTU to make most use of single-segment buffers
If the FTMAC100 is used as a DSA master, then it is expected that frames
which are MTU sized on the wire facing the external switch port (1500
octets in L2 payload, plus L2 header) also get a DSA tag when seen by
the host port.

This extra tag increases the length of the packet as the host port sees
it, and the FTMAC100 is not prepared to handle frames whose length
exceeds 1518 octets (including FCS) at all.

Only a minimal rework is needed to support this configuration. Since
MTU-sized DSA-tagged frames still fit within a single buffer (RX_BUF_SIZE),
we just need to optimize the resource management rather than implement
multi-buffer RX.

In ndo_change_mtu(), we toggle the FTMAC100_MACCR_RX_FTL bit to tell the
hardware to drop (or not) frames with an L2 payload length larger than
1500. We need to replicate the MACCR configuration in ftmac100_start_hw()
as well, since there is a hardware reset there which clears previous
settings.

The advantage of dynamically changing FTMAC100_MACCR_RX_FTL is that when
dev->mtu is at the default value of 1500, large frames are automatically
dropped in hardware and we do not spend CPU cycles dropping them.
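
A sketch of the resulting ndo_change_mtu() logic (simplified; register
names assumed from the existing driver):

	static int ftmac100_change_mtu(struct net_device *netdev, int mtu)
	{
		struct ftmac100 *priv = netdev_priv(netdev);
		unsigned int maccr;

		maccr = ioread32(priv->base + FTMAC100_OFFSET_MACCR);
		if (mtu > ETH_DATA_LEN) {
			/* process "too long" frames in the driver */
			maccr &= ~FTMAC100_MACCR_RX_FTL;
		} else {
			/* drop frames longer than 1518 octets in hardware */
			maccr |= FTMAC100_MACCR_RX_FTL;
		}
		iowrite32(maccr, priv->base + FTMAC100_OFFSET_MACCR);

		netdev->mtu = mtu;
		return 0;
	}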

Suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20221028183220.155948-3-saproj@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:02:57 -07:00
Vladimir Oltean
30f837b7b9 net: ftmac100: report the correct maximum MTU of 1500
The driver uses MAX_PKT_SIZE (1518) for both MTU reporting and for
TX. However, the two places do not measure the same thing.

On TX, skb->len measures the entire L2 packet length (without FCS, which
software does not possess). So the comparison against 1518 there is
correct.

What is not correct is the reporting of dev->max_mtu as 1518. Since MTU
measures L2 *payload* length (excluding L2 overhead) and not total L2
packet length, it means that the correct max_mtu supported by this
device is the standard 1500. Anything higher than that will be dropped
on RX currently.

To fix this, subtract VLAN_ETH_HLEN from MAX_PKT_SIZE when reporting the
max_mtu, since that is the difference between L2 payload length and
total L2 length as seen by software.
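
In effect, the computation becomes (sketch):

	/* 1518 - (ETH_HLEN + VLAN_HLEN) = 1518 - 18 = 1500 */
	netdev->max_mtu = MAX_PKT_SIZE - VLAN_ETH_HLEN;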

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20221028183220.155948-2-saproj@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:02:57 -07:00
Vladimir Oltean
55f6f3dbcf net: ftmac100: prepare data path for receiving single segment packets > 1514
Eliminate one check in the data path and move it elsewhere, to where our
real limitation is. We'll want to start processing "too long" frames in
the driver (currently there is a hardware MAC setting which drops
these).

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20221028183220.155948-1-saproj@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:02:57 -07:00
Steffen Bätz
91e87045a5 net: dsa: mv88e6xxx: Add RGMII delay to 88E6320
Currently, the .port_set_rgmii_delay hook is missing for the 88E6320
family, which causes failure to retrieve an IP address via DHCP.

Add mv88e6320_port_set_rgmii_delay() that allows applying the RGMII
delay for ports 2, 5, and 6, which are the only ports that can be used
in RGMII mode.
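
A sketch of the added hook (assumed to mirror the existing
mv88e6352/mv88e6390 helpers):

	static int mv88e6320_port_set_rgmii_delay(struct mv88e6xxx_chip *chip,
						  int port, phy_interface_t mode)
	{
		/* only ports 2, 5 and 6 support RGMII */
		if (port != 2 && port != 5 && port != 6)
			return -EOPNOTSUPP;

		return mv88e6xxx_port_set_rgmii_delay(chip, port, mode);
	}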

Tested on a custom i.MX8MN board connected to an 88E6320 switch.

This change also applies safely to the 88E6321 variant: the only
difference between the 88E6320 and 88E6321 is the temperature grade
and pinout. They share exactly the same MDIO register map for ports 2,
5, and 6.

Signed-off-by: Steffen Bätz <steffen@innosonix.de>
[fabio: Improved commit log and extended it to mv88e6321_ops]
Signed-off-by: Fabio Estevam <festevam@denx.de>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20221028163158.198108-1-festevam@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 20:00:20 -07:00
Jakub Kicinski
eff1744e62 Merge branch 'rtnetlink-honour-nlm_f_echo-flag-in-rtnl_-new-del-link'
Hangbin Liu says:

====================
rtnetlink: Honour NLM_F_ECHO flag in rtnl_{new, del}link

Netlink messages are used for communicating between user and kernel space.
When user space configures the kernel with netlink messages, it can set the
NLM_F_ECHO flag to request the kernel to send the applied configuration back
to the caller. This allows user space to retrieve configuration
information that is filled in by the kernel (either because these
parameters can only be set by the kernel or because user space lets
the kernel choose a default value).

The kernel already supports this feature in some places, like
RTM_{NEW, DEL}ADDR and RTM_{NEW, DEL}ROUTE. This patch set handles the
NLM_F_ECHO flag and sends link info back after rtnl_{new, del}link.
====================
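
For context, a hypothetical userspace request asking for such an echo
simply sets NLM_F_ECHO in the request header (sketch; attribute
handling elided):

	#include <linux/netlink.h>
	#include <linux/rtnetlink.h>

	struct {
		struct nlmsghdr nlh;
		struct ifinfomsg ifm;
		/* link attributes would follow */
	} req = {
		.nlh = {
			.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
			.nlmsg_type  = RTM_NEWLINK,
			.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE |
				       NLM_F_EXCL | NLM_F_ACK | NLM_F_ECHO,
		},
	};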

Link: https://lore.kernel.org/r/20221028084224.3509611-1-liuhangbin@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 18:10:25 -07:00
Hangbin Liu
f3a63cce1b rtnetlink: Honour NLM_F_ECHO flag in rtnl_delete_link
This patch uses the new helper unregister_netdevice_many_notify() for
rtnl_delete_link(), so that the kernel can reply with a unicast message
when userspace sets the NLM_F_ECHO flag to request the link info.

At the same time, the parameters of rtnl_delete_link() need to be
updated, since we need the nlmsghdr and portid info.

Suggested-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 18:10:21 -07:00
Hangbin Liu
d88e136cab rtnetlink: Honour NLM_F_ECHO flag in rtnl_newlink_create
This patch passes the netlink message header in rtnl_newlink_create()
to the newly updated rtnl_configure_link(), so that the kernel can
reply with a unicast message when userspace sets the NLM_F_ECHO flag to
request the newly created interface info.

Suggested-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 18:10:21 -07:00
Hangbin Liu
77f4aa9a2a net: add new helper unregister_netdevice_many_notify
Add a new helper, unregister_netdevice_many_notify(), which takes the
netlink message header and portid; these can be used to notify
userspace when the NLM_F_ECHO flag is set.

Make unregister_netdevice_many() a wrapper around the new
unregister_netdevice_many_notify().
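
The wrapper relationship, roughly (a sketch assumed to be simplified
from the actual patch):

	void unregister_netdevice_many(struct list_head *head)
	{
		unregister_netdevice_many_notify(head, 0, NULL);
	}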

Suggested-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 18:10:21 -07:00
Hangbin Liu
1d997f1013 rtnetlink: pass netlink message header and portid to rtnl_configure_link()
This patch passes the netlink message header and portid to
rtnl_configure_link(). All the functions in this call chain need to
gain these parameters so we can use them in the final rtnl_notify()
call and notify userspace about the new link info if the NLM_F_ECHO
flag is set.

- rtnl_configure_link()
  - __dev_notify_flags()
    - rtmsg_ifinfo()
      - rtmsg_ifinfo_event()
        - rtmsg_ifinfo_build_skb()
        - rtmsg_ifinfo_send()
          - rtnl_notify()

Also move __dev_notify_flags() declaration to net/core/dev.h, as Jakub
suggested.

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 18:10:21 -07:00
Sean Anderson
37fe9b9816 net: dpaa2: Add some debug prints on deferred probe
When probing of this device is deferred, there is often no way to
determine what the cause was. Add some debug prints to make it easier
to figure out what is blocking the probe.
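
Illustrative pattern only; the call site and message text below are
assumptions, not the actual patch:

	err = fsl_mc_portal_allocate(dpni_dev, 0, &priv->mc_io);
	if (err) {
		if (err == -EPROBE_DEFER)
			dev_dbg(dev, "waiting for MC portal, deferring probe\n");
		return err;
	}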

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Link: https://lore.kernel.org/r/20221027190005.400839-1-sean.anderson@seco.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-31 17:35:24 -07:00
Colin Ian King
0cf9deb300 net: mvneta: Remove unused variable i
Variable i is just being incremented and is never used anywhere else.
The variable and the increment are redundant, so remove them.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:47:31 +00:00
David S. Miller
5565dbd01e Merge branch 'ptp-adjfine'
Jacob Keller says:

====================
ptp: convert drivers to .adjfine

Many drivers implementing PTP have not yet migrated to the new .adjfine
frequency adjustment implementation.

A handful of these drivers use hardware with a simple increment value which
is adjusted by multiplying by the adjustment factor and then dividing by
1 billion. This calculation is very easy to convert to .adjfine, by simply
updating the divisor.

Introduce new helper functions, diff_by_scaled_ppm and adjust_by_scaled_ppm,
which perform the most common calculations used by drivers for this purpose.

The adjust_by_scaled_ppm takes the base increment and scaled PPM value, and
calculates the new increment to use.

A few drivers need the difference and direction rather than a raw increment
value. The diff_by_scaled_ppm calculates the difference and returns true if
it should be a subtraction, false otherwise. This most closely aligns with
existing driver implementations.
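
As an illustration, a converted .adjfine callback might look like this
(driver and field names hypothetical):

	static int foo_ptp_adjfine(struct ptp_clock_info *info, long scaled_ppm)
	{
		struct foo_ptp *fp = container_of(info, struct foo_ptp, caps);
		u64 incr = adjust_by_scaled_ppm(fp->base_incr, scaled_ppm);

		foo_write_increment(fp, incr);
		return 0;
	}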

I previously submitted v1 of this series at [1], and got some feedback only
on a handful of drivers. In the interest of merging the changes which have
received feedback, I've dropped the following drivers out of this send:

 * ptp_phc
 * ptp_ipx46x
 * tg3
 * hclge
 * stmac
 * cpts

I plan to submit those drivers changes again at a later date. As before,
there are some drivers which are not trivial to convert to the new helper
functions. While they may be able to work, their implementation is different
and I lack the hardware or datasheets to determine what the correct
implementation would be.

* drivers/net/ethernet/broadcom/bnx2x
* drivers/net/ethernet/broadcom/bnxt
* drivers/net/ethernet/cavium/liquidio
* drivers/net/ethernet/chelsio/cxgb4
* drivers/net/ethernet/freescale
* drivers/net/ethernet/qlogic/qed
* drivers/net/ethernet/qlogic/qede
* drivers/net/ethernet/sfc
* drivers/net/ethernet/sfc/siena
* drivers/net/ethernet/ti/am65-cpts.c
* drivers/ptp/ptp_dte.c

My end goal is to drop the .adjfreq implementation entirely, and to that end
I plan on modifying these drivers in the future to directly use
scaled_ppm_to_ppb as the simplest method to convert them.

Changes since v2:
* Rebased to allow landing in 6.2
* Added Richard's Acked-by

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Siva Reddy Kallam <siva.kallam@broadcom.com>
Cc: Prashant Sreedharan <prashant@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>
Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
Cc: Salil Mehta <salil.mehta@huawei.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: Tariq Toukan <tariqt@nvidia.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Bryan Whitehead <bryan.whitehead@microchip.com>
Cc: Sergey Shtylyov <s.shtylyov@omp.ru>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Jose Abreu <joabreu@synopsys.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Vivek Thampi <vithampi@vmware.com>
Cc: VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
Cc: Jie Wang <wangjie125@huawei.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Cc: Guangbin Huang <huangguangbin2@huawei.com>
Cc: Eran Ben Elisha <eranbe@nvidia.com>
Cc: Aya Levin <ayal@nvidia.com>
Cc: Cai Huoqing <cai.huoqing@linux.dev>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Cc: Phil Edworthy <phil.edworthy@renesas.com>
Cc: Jiasheng Jiang <jiasheng@iscas.ac.cn>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Wan Jiabing <wanjiabing@vivo.com>
Cc: Lv Ruyi <lv.ruyi@zte.com.cn>
Cc: Arnd Bergmann <arnd@arndb.de>
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
337ffae0e4 ptp: xgbe: convert to .adjfine and adjust_by_scaled_ppm
The xgbe .adjfreq callback is implemented in terms of a straightforward
"base * ppb / 1 billion" calculation.

Convert this driver to .adjfine and use adjust_by_scaled_ppm to calculate
the new addend value.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
673dd2c788 ptp: ravb: convert to .adjfine and adjust_by_scaled_ppm
The ravb .adjfreq callback is implemented in terms of a straightforward
"base * ppb / 1 billion" calculation.

Convert this driver to .adjfine and use the adjust_by_scaled_ppm helper
function to calculate the new addend.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Sergey Shtylyov <s.shtylyov@omp.ru>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Phil Edworthy <phil.edworthy@renesas.com>
Cc: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Cc: linux-renesas-soc@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
8bc900cbff ptp: lan743x: use diff_by_scaled_ppm in .adjfine implementation
Update the lan743x driver to use the recently added diff_by_scaled_ppm
helper function. This reduces the amount of code required in the
lan743x_ptp.c driver file.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Bryan Whitehead <bryan.whitehead@microchip.com>
Cc: UNGLinuxDriver@microchip.com
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
c56dff6a9a ptp: lan743x: remove .adjfreq implementation
The lan743x driver implements both .adjfreq and .adjfine, but the core PTP
subsystem prefers .adjfine if implemented. There is no reason to carry a
.adjfreq implementation, so we can remove it.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Bryan Whitehead <bryan.whitehead@microchip.com>
Cc: UNGLinuxDriver@microchip.com
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
d8aad3f369 ptp: mlx5: convert to .adjfine and adjust_by_scaled_ppm
The mlx5 .adjfreq callback is implemented in terms of a straightforward
"base * ppb / 1 billion" calculation.

Convert this to the .adjfine interface and use adjust_by_scaled_ppm for the
calculation of the new mult value.

Note that the mlx5_ptp_adjfreq_real_time function expects input in terms of
ppb, so use scaled_ppm_to_ppb to convert before passing it to this
function.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Shirly Ohnona <shirlyo@nvidia.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Gal Pressman <gal@nvidia.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Aya Levin <ayal@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
6ed795965e ptp: mlx4: convert to .adjfine and adjust_by_scaled_ppm
The mlx4 .adjfreq callback is implemented in terms of a straightforward
"base * ppb / 1 billion" calculation.

Convert this driver to .adjfine and use adjust_by_scaled_ppm to perform the
calculation.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
73aa29a2b1 drivers: convert unsupported .adjfreq to .adjfine
A few PTP drivers implement a .adjfreq handler which indicates the
operation is not supported. Convert all of these to .adjfine.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Vivek Thampi <vithampi@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
1060707e38 ptp: introduce helpers to adjust by scaled parts per million
Many drivers implement the .adjfreq or .adjfine PTP op function with the
same basic logic:

  1. Determine a base frequency value
  2. Multiply this by the abs() of the requested adjustment, then divide by
     the appropriate divisor (1 billion, or 65.536 billion).
  3. Add or subtract this difference from the base frequency to calculate a
     new adjustment.

A few drivers need the difference and direction rather than the combined
new increment value.

I recently converted the Intel drivers to .adjfine and the scaled parts per
million logic (in effect, parts per 65.536 billion). To avoid overflow with
minimal loss of precision, mul_u64_u64_div_u64 was used.

The basic logic used by all of these drivers is very similar, and leads to
a lot of duplicate code to perform the same task.

Rather than keep this duplicate code, introduce diff_by_scaled_ppm and
adjust_by_scaled_ppm. These helper functions calculate the difference or
adjustment necessary based on the scaled parts per million input.

The diff_by_scaled_ppm function returns true if the difference should be
subtracted, and false otherwise.
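
A sketch of the two helpers, consistent with the description above (the
in-tree version may differ in detail):

	static inline bool diff_by_scaled_ppm(u64 base, long scaled_ppm, u64 *diff)
	{
		bool negative = false;

		if (scaled_ppm < 0) {
			negative = true;
			scaled_ppm = -scaled_ppm;
		}

		/* the divisor is 10^6 * 2^16, i.e. 65.536 billion */
		*diff = mul_u64_u64_div_u64(base, (u64)scaled_ppm,
					    1000000ULL << 16);

		return negative;
	}

	static inline u64 adjust_by_scaled_ppm(u64 base, long scaled_ppm)
	{
		u64 diff;

		if (diff_by_scaled_ppm(base, scaled_ppm, &diff))
			return base - diff;

		return base + diff;
	}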

Update the Intel drivers to use the new helper functions. Other vendor
drivers will be converted to .adjfine and this helper function in the
following changes.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Jacob Keller
b9a61b9779 ptp: add missing documentation for parameters
The ptp_find_pin_unlocked function and the ptp_system_timestamp structure
didn't document their parameters and fields. Fix this.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 11:14:16 +00:00
Frank
70479a4095 net: phy: Add driver for Motorcomm yt8521 gigabit ethernet phy
Add a driver for the motorcomm yt8521 gigabit ethernet phy. We have
verified the driver on the StarFive VisionFive development board, which
is developed by Shanghai StarFive Technology Co., Ltd. On the board, the
yt8521 gigabit ethernet phy works in utp mode with an RGMII interface,
and supports 1000M/100M/10M speeds and WOL (magic packet).

Signed-off-by: Frank <Frank.Sae@motor-comm.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 10:51:54 +00:00
Yang Yingliang
e8572f038a net: microchip: sparx5: kunit test: change test_callbacks and test_vctrl to static
test_callbacks and test_vctrl are only used in vcap_api_kunit.c now,
so change them to static.

Fixes: 67d637516f ("net: microchip: sparx5: Adding KUNIT test for the VCAP API")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Steen Hegelund <Steen.Hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 10:48:32 +00:00
Jakub Kicinski
8c2a535e08 net: geneve: fix array of flexible structures warnings
New compilers don't like flexible arrays of flexible structs:

  include/net/geneve.h:62:34: warning: array of flexible structures
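
The pattern being warned about, roughly (struct members elided):

	struct geneve_opt {
		/* ... */
		u8 opt_data[];			/* flexible array member */
	};

	struct genevehdr {
		/* ... */
		struct geneve_opt options[];	/* array of flexible structures */
	};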

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 10:43:04 +00:00
Yang Yingliang
47aeed9d2c net: hns: hnae: remove unnecessary __module_get() and module_put()
hnae_ae_register() is called from hns_dsaf_probe(); the refcount of
the hnae module has already been taken in resolve_symbol() when the
function is called, so the __module_get()/module_put() pair can be
removed.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 09:18:14 +00:00
Jakub Kicinski
738136a0e3 netlink: split up copies in the ack construction
Clean up the use of unsafe_memcpy() by adding a flexible array
at the end of the netlink message header and splitting up the header
and data copies.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 09:13:10 +00:00
Denis Kirjanov
eca485d221 drivers: net: convert to boolean for the mac_managed_pm flag
Signed-off-by: Dennis Kirjanov <dkirjanov@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 09:10:45 +00:00
Sebastian Reichel
8fc4deaa8b dt-bindings: net: snps,dwmac: Document queue config subnodes
The queue configuration is referenced by snps,mtl-rx-config and
snps,mtl-tx-config. Some in-tree DTs and the example put the
referenced config nodes directly beneath the root node, but
most in-tree DTs put them as child nodes of the dwmac node.

This adds a proper description of this setup, which has the
advantage of validating the queue configuration node content.

The example is also updated to use the sub-node style, incl.
the axi bus configuration node, which got the same treatment
as the queues config in 5361660af6 ("dt-bindings: net: snps,dwmac:
Document stmmac-axi-config subnode").

Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-31 09:08:42 +00:00
Juhee Kang
f3fb589aeb net: remove unused netdev_unregistering()
Check the status with dev->reg_state == NETREG_UNREGISTERING directly
rather than using netdev_unregistering(). The netdev_unregistering()
helper in netdevice.h is then no longer used anywhere, so remove it
from netdevice.h.

Signed-off-by: Juhee Kang <claudiajkang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-30 21:56:39 +00:00
Jakub Kicinski
02a97e02c6 Merge tag 'mlx5-updates-2022-10-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2022-10-24

SW steering updates from Yevgeny Kliteynik:

1) 1st Four patches: small fixes / optimizations for SW steering:

 - Patch 1: Don't abort destroy flow if failed to destroy table - continue
   and free everything else.
 - Patches 2 and 3 deal with fast teardown:
    + Skip sync during fast teardown, as PCI device is not there any more.
    + Check device state when polling CQ - otherwise SW steering keeps polling
      the CQ forever, because nobody is there to flush it.
 - Patch 4: Removing unneeded function argument.

2) Deal with the hiccups that we get during rules insertion/deletion,
which sometimes reach 1/4 of a second. While insertion/deletion rate
improvement was not the focus here, it still is a by-product of removing these
hiccups.

Another by-product is the reduced standard deviation in measuring the duration
of rules insertion/deletion bursts.

In the testing we add K rules (warm-up phase), and then continuously do
insertion/deletion bursts of N rules.
During the test execution, the driver measures hiccups (amount and duration)
and total time for insertion/deletion of a batch of rules.

Here are some numbers, before and after these patches:

+--------------------------------------------+-----------------+----------------+
|                                            |   Create rules  |  Delete rules  |
|                                            +--------+--------+--------+-------+
|                                            | Before |  After | Before | After |
+--------------------------------------------+--------+--------+--------+-------+
| Max hiccup [msec]                          |    253 |     42 |    254 |    68 |
+--------------------------------------------+--------+--------+--------+-------+
| Avg duration of 10K rules add/remove [msec]| 140.07 | 124.32 | 106.99 | 99.51 |
+--------------------------------------------+--------+--------+--------+-------+
| Num of hiccups per 100K rules add/remove   |   7.77 |   7.97 |  12.60 | 11.57 |
+--------------------------------------------+--------+--------+--------+-------+
| Avg hiccup duration [msec]                 |  36.92 |  33.25 |  36.15 | 33.74 |
+--------------------------------------------+--------+--------+--------+-------+

 - Patch 5: Allocate a short array on stack instead of dynamically- it is
   destroyed at the end of the function.
 - Patch 6: Rather than cleaning the corresponding chunk's section of
   ste_arrays on chunk deletion, initialize these areas upon chunk creation.
   Chunk destruction tends to come in large batches (during pool syncing),
   so instead of doing huge memory initialization during pool sync,
   we amortize this by doing small initializations on chunk creation.
 - Patch 7: To simplify error flow and allow cleaner addition
   of new pools, handle creation/destruction of all the domain's memory pools
   and other memory-related fields in separate init/uninit functions.
 - Patch 8: During rehash, write each table row immediately instead of waiting
   for the whole table to be ready and writing it all - saves allocations
   of ste_send_info structures and improves performance.
 - Patch 9: Instead of allocating/freeing send info objects dynamically,
   manage them in a pool. The number of send info objects doesn't depend on
   number of rules, so after pre-populating the pool with an initial batch of
   send info objects, the pool is not expected to grow.
   This way we save alloc/free during writing STEs to ICM, which by itself can
   sometimes take up to 40msec.
 - Patch 10: Allocate icm_chunks from their own slab allocator, which lowered
   the alloc/free "hiccups" frequency.
 - Patch 11: Similar to patch 9, allocate htbl from its own slab allocator.
 - Patch 12: Lower sync threshold for ICM hot memory - set the threshold for
   sync to 1/4 of the pool instead of 1/2 of the pool. Although we will have
   more syncs, each sync will be shorter and will help with insertion rate
   stability. Also, notice that the overall number of hiccups wasn't increased
   due to all the other patches.
 - Patch 13: Keep track of hot ICM chunks in an array instead of list.
   After steering sync, we traverse the hot list and finally free all the
   chunks. It appears that traversing a long list takes an unusually long
   time due to cache misses on many entries, which causes a big "hiccup"
   during rule insertion. This patch replaces the list with a pre-allocated
   array that
   stores only the bookkeeping information that is needed to later free the
   chunks in its buddy allocator.
 - Patch 14: Remove the unneeded buddy used_list - we don't need to have the
   list of used chunks, we only need the total amount of used memory.

* tag 'mlx5-updates-2022-10-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: DR, Remove the buddy used_list
  net/mlx5: DR, Keep track of hot ICM chunks in an array instead of list
  net/mlx5: DR, Lower sync threshold for ICM hot memory
  net/mlx5: DR, Allocate htbl from its own slab allocator
  net/mlx5: DR, Allocate icm_chunks from their own slab allocator
  net/mlx5: DR, Manage STE send info objects in pool
  net/mlx5: DR, In rehash write the line in the entry immediately
  net/mlx5: DR, Handle domain memory resources init/uninit separately
  net/mlx5: DR, Initialize chunk's ste_arrays at chunk creation
  net/mlx5: DR, For short chains of STEs, avoid allocating ste_arr dynamically
  net/mlx5: DR, Remove unneeded argument from dr_icm_chunk_destroy
  net/mlx5: DR, Check device state when polling CQ
  net/mlx5: DR, Fix the SMFS sync_steering for fast teardown
  net/mlx5: DR, In destroy flow, free resources even if FW command failed
====================

Link: https://lore.kernel.org/r/20221027145643.6618-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:07:48 -07:00
Jakub Kicinski
eb288cbde8 Merge branch 'net-ipa-start-adding-ipa-v5-0-functionality'
Alex Elder says:

====================
net: ipa: start adding IPA v5.0 functionality

The biggest change for IPA v5.0 is that it supports more than 32
endpoints.  However there are two other unrelated changes:
  - The STATS_TETHERING memory region is not required
  - Filter tables no longer support a "global" filter

Beyond this, refactoring some code makes supporting more than 32
endpoints (in an upcoming series) easier.  So this series includes
a few other changes (not in this order):
  - The maximum endpoint ID in use is determined during config
  - Loops over all endpoints only involve those in use
  - Endpoint IDs and their directions are checked for validity
    differently to simplify comparison against the maximum
====================

Link: https://lore.kernel.org/r/20221027122632.488694-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:07:05 -07:00
Alex Elder
b7aaff0b01 net: ipa: record and use the number of defined endpoint IDs
Define a new field in the IPA structure that records the maximum
number of entries that will be used in the IPA endpoint array.  Use
that value rather than IPA_ENDPOINT_MAX to determine the end
condition for two loops that iterate over all endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:48 -07:00
Alex Elder
5274c7158b net: ipa: determine the maximum endpoint ID
Each endpoint ID has an entry in the IPA endpoint array.  But the
size of that array is defined at compile time.  Instead, rename
ipa_endpoint_data_valid() to be ipa_endpoint_max() and have it
return the maximum endpoint ID defined in configuration data.
That function will still validate configuration data.

Zero is returned on error; it's a valid endpoint ID, but we need
more than one, so it can't be the maximum.  The next patch makes use
of the returned maximum value.

Finally, rename the "initialized" mask of endpoints defined by
configuration data to be "defined".

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:47 -07:00
Alex Elder
e359ba89a4 net: ipa: refactor endpoint loops
Change two functions that iterate over all endpoints to use while
loops, using "endpoint_id" as the index variable in both spots.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:47 -07:00
Alex Elder
2b87d72199 net: ipa: more completely check endpoint validity
Ensure all defined TX endpoints are in the range [0, CONS_PIPES) and
defined RX endpoints are within [PROD_LOWEST, PROD_LOWEST+PROD_PIPES).

Modify the way local variables are used to make the checks easier
to understand.  Check for each endpoint being in valid range in the
loop, and drop the logical-AND check of initialized against
unavailable IDs.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:47 -07:00
Alex Elder
bd5524930b net: ipa: no more global filtering starting with IPA v5.0
IPA v5.0 eliminates the global filter table entry.  As a result,
there is no need to shift the filtered endpoint bitmap when it is
written to IPA local memory.

Update comments to explain this.  Also delete a redundant block of
comments above the function.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:46 -07:00
Alex Elder
5ba5faa2e2 net: ipa: change an IPA v5.0 memory requirement
Don't require IPA v5.0 to have a STATS_TETHERING memory region.
Downstream defines its size to 0, so it apparently is unused.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:46 -07:00
Alex Elder
5783c68a25 net: ipa: define IPA v5.0
In preparation for adding support for IPA v5.0, define it as an
understood version.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:06:46 -07:00
Willem de Bruijn
58ba426388 net/packet: add PACKET_FANOUT_FLAG_IGNORE_OUTGOING
Extend packet socket option PACKET_IGNORE_OUTGOING to fanout groups.

The socket option sets ptype.ignore_outgoing, which makes
dev_queue_xmit_nit skip the socket.

When the socket joins a fanout group, the option is not reflected in
the struct ptype of the group. dev_queue_xmit_nit only tests the
fanout ptype, so the flag is ignored once a socket joins a
fanout group.

Inheriting the option from a socket would change established behavior.
Different sockets in the group can set different flags, and can also
change them at runtime.

Testing in packet_rcv_fanout defeats the purpose of the original
patch, which is to avoid skb_clone in dev_queue_xmit_nit (esp. for
MSG_ZEROCOPY packets).

Instead, introduce a new fanout group flag with the same behavior.
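
A hedged userspace sketch of joining a fanout group with the new flag
(group id and mode are arbitrary here):

	#include <sys/socket.h>
	#include <linux/if_packet.h>

	static int join_fanout(int fd)
	{
		unsigned int arg = 42 |
			((PACKET_FANOUT_HASH |
			  PACKET_FANOUT_FLAG_IGNORE_OUTGOING) << 16);

		return setsockopt(fd, SOL_PACKET, PACKET_FANOUT,
				  &arg, sizeof(arg));
	}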

Tested with https://github.com/wdebruij/kerneltools/blob/master/tests/test_psock_fanout_ignore_outgoing.c

Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20221027211014.3581513-1-willemdebruijn.kernel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-28 22:00:49 -07:00