Mirror of https://mirrors.bfsu.edu.cn/git/linux.git (synced 2024-11-11 12:28:41 +08:00)
Networking fixes for 6.0-rc4, including fixes from bluetooth, bpf and wireless.

Merge tag 'net-6.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from bluetooth, bpf and wireless.

  Current release - regressions:

   - bpf:
      - fix wrong last sg check in sk_msg_recvmsg()
      - fix kernel BUG in purge_effective_progs()

   - mac80211:
      - fix possible leak in ieee80211_tx_control_port()
      - potential NULL dereference in ieee80211_tx_control_port()

  Current release - new code bugs:

   - nfp: fix the access to management firmware hanging

  Previous releases - regressions:

   - ip: fix triggering of 'icmp redirect'

   - sched: tbf: don't call qdisc_put() while holding tree lock

   - bpf: fix corrupted packets for XDP_SHARED_UMEM

   - bluetooth: hci_sync: fix suspend performance regression

   - micrel: fix probe failure

  Previous releases - always broken:

   - tcp: make global challenge ack rate limitation per net-ns and
     default disabled

   - tg3: fix potential hang-up on system reboot

   - mac802154: fix reception for no-daddr packets

  Misc:

   - r8152: add PID for the lenovo onelink+ dock"

Signed-off-by: Paolo Abeni <pabeni@redhat.com>

* tag 'net-6.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (56 commits)
  net/smc: Remove redundant refcount increase
  Revert "sch_cake: Return __NET_XMIT_STOLEN when consuming enqueued skb"
  tcp: make global challenge ack rate limitation per net-ns and default disabled
  tcp: annotate data-race around challenge_timestamp
  net: dsa: hellcreek: Print warning only once
  ip: fix triggering of 'icmp redirect'
  sch_cake: Return __NET_XMIT_STOLEN when consuming enqueued skb
  selftests: net: sort .gitignore file
  Documentation: networking: correct possessive "its"
  kcm: fix strp_init() order and cleanup
  mlxbf_gige: compute MDIO period based on i1clk
  ethernet: rocker: fix sleep in atomic context bug in neigh_timer_handler
  net: lan966x: improve error handle in lan966x_fdma_rx_get_frame()
  nfp: fix the access to management firmware hanging
  net: phy: micrel: Make the GPIO to be non-exclusive
  net: virtio_net: fix notification coalescing comments
  net/sched: fix netdevice reference leaks in attach_default_qdiscs()
  net: sched: tbf: don't call qdisc_put() while holding tree lock
  net: Use u64_stats_fetch_begin_irq() for stats fetch.
  net: dsa: xrs700x: Use irqsave variant for u64 stats update
  ...
This commit is contained in: commit 42e66b1cc3
@@ -67,7 +67,7 @@ The ``netdevsim`` driver supports rate objects management, which includes:
 - setting tx_share and tx_max rate values for any rate object type;
 - setting parent node for any rate object type.
 
-Rate nodes and it's parameters are exposed in ``netdevsim`` debugfs in RO mode.
+Rate nodes and their parameters are exposed in ``netdevsim`` debugfs in RO mode.
 For example created rate node with name ``some_group``:
 
 .. code:: shell
@@ -8,7 +8,7 @@ Transmit path guidelines:
 
 1) The ndo_start_xmit method must not return NETDEV_TX_BUSY under
    any normal circumstances. It is considered a hard error unless
-   there is no way your device can tell ahead of time when it's
+   there is no way your device can tell ahead of time when its
    transmit function will become busy.
 
    Instead it must maintain the queue properly. For example,
@@ -1035,7 +1035,10 @@ tcp_limit_output_bytes - INTEGER
 tcp_challenge_ack_limit - INTEGER
 	Limits number of Challenge ACK sent per second, as recommended
 	in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
-	Default: 1000
+	Note that this per netns rate limit can allow some side channel
+	attacks and probably should not be enabled.
+	TCP stack implements per TCP socket limits anyway.
+	Default: INT_MAX (unlimited)
 
 UDP variables
 =============
@@ -11,7 +11,7 @@ Initial Release:
 ================
 This is conceptually very similar to the macvlan driver with one major
 exception of using L3 for mux-ing /demux-ing among slaves. This property makes
-the master device share the L2 with it's slave devices. I have developed this
+the master device share the L2 with its slave devices. I have developed this
 driver in conjunction with network namespaces and not sure if there is use case
 outside of it.
 
@@ -530,7 +530,7 @@ its tunnel close actions. For L2TPIP sockets, the socket's close
 handler initiates the same tunnel close actions. All sessions are
 first closed. Each session drops its tunnel ref. When the tunnel ref
 reaches zero, the tunnel puts its socket ref. When the socket is
-eventually destroyed, it's sk_destruct finally frees the L2TP tunnel
+eventually destroyed, its sk_destruct finally frees the L2TP tunnel
 context.
 
 Sessions
@@ -159,7 +159,7 @@ tools such as iproute2.
 
 The switchdev driver can know a particular port's position in the topology by
 monitoring NETDEV_CHANGEUPPER notifications. For example, a port moved into a
-bond will see it's upper master change. If that bond is moved into a bridge,
+bond will see its upper master change. If that bond is moved into a bridge,
 the bond's upper master will change. And so on. The driver will track such
 movements to know what position a port is in in the overall topology by
 registering for netdevice events and acting on NETDEV_CHANGEUPPER.
@@ -109,6 +109,7 @@ static void xrs700x_read_port_counters(struct xrs700x *priv, int port)
 {
 	struct xrs700x_port *p = &priv->ports[port];
 	struct rtnl_link_stats64 stats;
+	unsigned long flags;
 	int i;
 
 	memset(&stats, 0, sizeof(stats));
@@ -138,9 +139,9 @@ static void xrs700x_read_port_counters(struct xrs700x *priv, int port)
 	 */
 	stats.rx_packets += stats.multicast;
 
-	u64_stats_update_begin(&p->syncp);
+	flags = u64_stats_update_begin_irqsave(&p->syncp);
 	p->stats64 = stats;
-	u64_stats_update_end(&p->syncp);
+	u64_stats_update_end_irqrestore(&p->syncp, flags);
 
 	mutex_unlock(&p->mib_mutex);
 }
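
The xrs700x change above is the template for the u64_stats conversions repeated across gemini, funeth, gve, hinic, netdevsim, nfp and mac80211 below: the writer takes the stats seqcount with interrupts disabled, and readers use the _irq fetch helpers. A minimal self-contained sketch of that pairing, assuming a 32-bit kernel where the same counters can be touched from hard-IRQ context (the struct and function names here are invented for illustration):

.. code:: c

   #include <linux/u64_stats_sync.h>

   struct demo_stats {
   	u64 packets;
   	u64 bytes;
   	struct u64_stats_sync syncp;
   };

   /* writer: may race with an IRQ-context updater, so save/restore flags */
   static void demo_update(struct demo_stats *s, u64 len)
   {
   	unsigned long flags;

   	flags = u64_stats_update_begin_irqsave(&s->syncp);
   	s->packets++;
   	s->bytes += len;
   	u64_stats_update_end_irqrestore(&s->syncp, flags);
   }

   /* reader: retry until a consistent snapshot of both counters is seen */
   static void demo_read(struct demo_stats *s, u64 *packets, u64 *bytes)
   {
   	unsigned int start;

   	do {
   		start = u64_stats_fetch_begin_irq(&s->syncp);
   		*packets = s->packets;
   		*bytes = s->bytes;
   	} while (u64_stats_fetch_retry_irq(&s->syncp, start));
   }

On 64-bit builds the seqcount side of these helpers compiles away to plain loads and stores, which is why the blanket conversion in this series is cheap where the race cannot occur.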
@@ -18076,16 +18076,20 @@ static void tg3_shutdown(struct pci_dev *pdev)
 	struct net_device *dev = pci_get_drvdata(pdev);
 	struct tg3 *tp = netdev_priv(dev);
 
+	tg3_reset_task_cancel(tp);
+
 	rtnl_lock();
+
 	netif_device_detach(dev);
 
 	if (netif_running(dev))
 		dev_close(dev);
 
-	if (system_state == SYSTEM_POWER_OFF)
-		tg3_power_down(tp);
+	tg3_power_down(tp);
 
 	rtnl_unlock();
+
+	pci_disable_device(pdev);
 }
 
 /**
@@ -1919,7 +1919,7 @@ static void gmac_get_stats64(struct net_device *netdev,
 
 	/* Racing with RX NAPI */
 	do {
-		start = u64_stats_fetch_begin(&port->rx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
 
 		stats->rx_packets = port->stats.rx_packets;
 		stats->rx_bytes = port->stats.rx_bytes;
@@ -1931,11 +1931,11 @@ static void gmac_get_stats64(struct net_device *netdev,
 		stats->rx_crc_errors = port->stats.rx_crc_errors;
 		stats->rx_frame_errors = port->stats.rx_frame_errors;
 
-	} while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
 
 	/* Racing with MIB and TX completion interrupts */
 	do {
-		start = u64_stats_fetch_begin(&port->ir_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
 
 		stats->tx_errors = port->stats.tx_errors;
 		stats->tx_packets = port->stats.tx_packets;
@@ -1945,15 +1945,15 @@ static void gmac_get_stats64(struct net_device *netdev,
 		stats->rx_missed_errors = port->stats.rx_missed_errors;
 		stats->rx_fifo_errors = port->stats.rx_fifo_errors;
 
-	} while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
 
 	/* Racing with hard_start_xmit */
 	do {
-		start = u64_stats_fetch_begin(&port->tx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
 
 		stats->tx_dropped = port->stats.tx_dropped;
 
-	} while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
 
 	stats->rx_dropped += stats->rx_missed_errors;
 }
@@ -2031,18 +2031,18 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
 	/* Racing with MIB interrupt */
 	do {
 		p = values;
-		start = u64_stats_fetch_begin(&port->ir_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
 
 		for (i = 0; i < RX_STATS_NUM; i++)
 			*p++ = port->hw_stats[i];
 
-	} while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
 	values = p;
 
 	/* Racing with RX NAPI */
 	do {
 		p = values;
-		start = u64_stats_fetch_begin(&port->rx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
 
 		for (i = 0; i < RX_STATUS_NUM; i++)
 			*p++ = port->rx_stats[i];
@@ -2050,13 +2050,13 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
 			*p++ = port->rx_csum_stats[i];
 		*p++ = port->rx_napi_exits;
 
-	} while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
 	values = p;
 
 	/* Racing with TX start_xmit */
 	do {
 		p = values;
-		start = u64_stats_fetch_begin(&port->tx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
 
 		for (i = 0; i < TX_MAX_FRAGS; i++) {
 			*values++ = port->tx_frag_stats[i];
@@ -2065,7 +2065,7 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
 		*values++ = port->tx_frags_linearized;
 		*values++ = port->tx_hw_csummed;
 
-	} while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
 }
 
 static int gmac_get_ksettings(struct net_device *netdev,
@@ -206,9 +206,9 @@ struct funeth_rxq {
 
 #define FUN_QSTAT_READ(q, seq, stats_copy) \
 	do { \
-		seq = u64_stats_fetch_begin(&(q)->syncp); \
+		seq = u64_stats_fetch_begin_irq(&(q)->syncp); \
 		stats_copy = (q)->stats; \
-	} while (u64_stats_fetch_retry(&(q)->syncp, (seq)))
+	} while (u64_stats_fetch_retry_irq(&(q)->syncp, (seq)))
 
 #define FUN_INT_NAME_LEN (IFNAMSIZ + 16)
 
@@ -177,14 +177,14 @@ gve_get_ethtool_stats(struct net_device *netdev,
 				struct gve_rx_ring *rx = &priv->rx[ring];
 
 				start =
-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
 				tmp_rx_pkts = rx->rpackets;
 				tmp_rx_bytes = rx->rbytes;
 				tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
 				tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
 				tmp_rx_desc_err_dropped_pkt =
 					rx->rx_desc_err_dropped_pkt;
-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
 						       start));
 			rx_pkts += tmp_rx_pkts;
 			rx_bytes += tmp_rx_bytes;
@@ -198,10 +198,10 @@ gve_get_ethtool_stats(struct net_device *netdev,
 		if (priv->tx) {
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
 				tmp_tx_pkts = priv->tx[ring].pkt_done;
 				tmp_tx_bytes = priv->tx[ring].bytes_done;
-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
 						       start));
 			tx_pkts += tmp_tx_pkts;
 			tx_bytes += tmp_tx_bytes;
@@ -259,13 +259,13 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			data[i++] = rx->fill_cnt - rx->cnt;
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
 				tmp_rx_bytes = rx->rbytes;
 				tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
 				tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
 				tmp_rx_desc_err_dropped_pkt =
 					rx->rx_desc_err_dropped_pkt;
-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
 						       start));
 			data[i++] = tmp_rx_bytes;
 			data[i++] = rx->rx_cont_packet_cnt;
@@ -331,9 +331,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			}
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
 				tmp_tx_bytes = tx->bytes_done;
-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
 						       start));
 			data[i++] = tmp_tx_bytes;
 			data[i++] = tx->wake_queue;
|
||||
for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
|
||||
do {
|
||||
start =
|
||||
u64_stats_fetch_begin(&priv->rx[ring].statss);
|
||||
u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
|
||||
packets = priv->rx[ring].rpackets;
|
||||
bytes = priv->rx[ring].rbytes;
|
||||
} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
|
||||
} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
|
||||
start));
|
||||
s->rx_packets += packets;
|
||||
s->rx_bytes += bytes;
|
||||
@ -64,10 +64,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
|
||||
for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
|
||||
do {
|
||||
start =
|
||||
u64_stats_fetch_begin(&priv->tx[ring].statss);
|
||||
u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
|
||||
packets = priv->tx[ring].pkt_done;
|
||||
bytes = priv->tx[ring].bytes_done;
|
||||
} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
|
||||
} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
|
||||
start));
|
||||
s->tx_packets += packets;
|
||||
s->tx_bytes += bytes;
|
||||
@ -1274,9 +1274,9 @@ void gve_handle_report_stats(struct gve_priv *priv)
|
||||
}
|
||||
|
||||
do {
|
||||
start = u64_stats_fetch_begin(&priv->tx[idx].statss);
|
||||
start = u64_stats_fetch_begin_irq(&priv->tx[idx].statss);
|
||||
tx_bytes = priv->tx[idx].bytes_done;
|
||||
} while (u64_stats_fetch_retry(&priv->tx[idx].statss, start));
|
||||
} while (u64_stats_fetch_retry_irq(&priv->tx[idx].statss, start));
|
||||
stats[stats_idx++] = (struct stats) {
|
||||
.stat_name = cpu_to_be32(TX_WAKE_CNT),
|
||||
.value = cpu_to_be64(priv->tx[idx].wake_queue),
|
||||
|
@@ -74,14 +74,14 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
 	unsigned int start;
 
 	do {
-		start = u64_stats_fetch_begin(&rxq_stats->syncp);
+		start = u64_stats_fetch_begin_irq(&rxq_stats->syncp);
 		stats->pkts = rxq_stats->pkts;
 		stats->bytes = rxq_stats->bytes;
 		stats->errors = rxq_stats->csum_errors +
 				rxq_stats->other_errors;
 		stats->csum_errors = rxq_stats->csum_errors;
 		stats->other_errors = rxq_stats->other_errors;
-	} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&rxq_stats->syncp, start));
 }
 
 /**
@@ -99,14 +99,14 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
 	unsigned int start;
 
 	do {
-		start = u64_stats_fetch_begin(&txq_stats->syncp);
+		start = u64_stats_fetch_begin_irq(&txq_stats->syncp);
 		stats->pkts = txq_stats->pkts;
 		stats->bytes = txq_stats->bytes;
 		stats->tx_busy = txq_stats->tx_busy;
 		stats->tx_wake = txq_stats->tx_wake;
 		stats->tx_dropped = txq_stats->tx_dropped;
 		stats->big_frags_pkts = txq_stats->big_frags_pkts;
-	} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&txq_stats->syncp, start));
 }
 
 /**
@@ -75,6 +75,7 @@ struct mlxbf_gige {
 	struct net_device *netdev;
 	struct platform_device *pdev;
 	void __iomem *mdio_io;
+	void __iomem *clk_io;
 	struct mii_bus *mdiobus;
 	spinlock_t lock;      /* for packet processing indices */
 	u16 rx_q_entries;
@@ -137,7 +138,8 @@ enum mlxbf_gige_res {
 	MLXBF_GIGE_RES_MDIO9,
 	MLXBF_GIGE_RES_GPIO0,
 	MLXBF_GIGE_RES_LLU,
-	MLXBF_GIGE_RES_PLU
+	MLXBF_GIGE_RES_PLU,
+	MLXBF_GIGE_RES_CLK
 };
 
 /* Version of register data returned by mlxbf_gige_get_regs() */
@@ -22,10 +22,23 @@
 #include <linux/property.h>
 
 #include "mlxbf_gige.h"
+#include "mlxbf_gige_regs.h"
 
 #define MLXBF_GIGE_MDIO_GW_OFFSET	0x0
 #define MLXBF_GIGE_MDIO_CFG_OFFSET	0x4
 
+#define MLXBF_GIGE_MDIO_FREQ_REFERENCE 156250000ULL
+#define MLXBF_GIGE_MDIO_COREPLL_CONST  16384ULL
+#define MLXBF_GIGE_MDC_CLK_NS          400
+#define MLXBF_GIGE_MDIO_PLL_I1CLK_REG1 0x4
+#define MLXBF_GIGE_MDIO_PLL_I1CLK_REG2 0x8
+#define MLXBF_GIGE_MDIO_CORE_F_SHIFT   0
+#define MLXBF_GIGE_MDIO_CORE_F_MASK    GENMASK(25, 0)
+#define MLXBF_GIGE_MDIO_CORE_R_SHIFT   26
+#define MLXBF_GIGE_MDIO_CORE_R_MASK    GENMASK(31, 26)
+#define MLXBF_GIGE_MDIO_CORE_OD_SHIFT  0
+#define MLXBF_GIGE_MDIO_CORE_OD_MASK   GENMASK(3, 0)
+
 /* Support clause 22 */
 #define MLXBF_GIGE_MDIO_CL22_ST1	0x1
 #define MLXBF_GIGE_MDIO_CL22_WRITE	0x1
@@ -50,28 +63,77 @@
 #define MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK	GENMASK(23, 16)
 #define MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK	GENMASK(31, 24)
 
-/* Formula for encoding the MDIO period. The encoded value is
- * passed to the MDIO config register.
- *
- * mdc_clk = 2*(val + 1)*i1clk
- *
- * 400 ns = 2*(val + 1)*(((1/430)*1000) ns)
- *
- * val = (((400 * 430 / 1000) / 2) - 1)
- */
-#define MLXBF_GIGE_I1CLK_MHZ           430
-#define MLXBF_GIGE_MDC_CLK_NS          400
-
-#define MLXBF_GIGE_MDIO_PERIOD (((MLXBF_GIGE_MDC_CLK_NS * MLXBF_GIGE_I1CLK_MHZ / 1000) / 2) - 1)
-
 #define MLXBF_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \
 				 FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \
 				 FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \
-				 FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK, \
-					    MLXBF_GIGE_MDIO_PERIOD) | \
 				 FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \
 				 FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13))
 
+#define MLXBF_GIGE_BF2_COREPLL_ADDR 0x02800c30
+#define MLXBF_GIGE_BF2_COREPLL_SIZE 0x0000000c
+
+static struct resource corepll_params[] = {
+	[MLXBF_GIGE_VERSION_BF2] = {
+		.start = MLXBF_GIGE_BF2_COREPLL_ADDR,
+		.end = MLXBF_GIGE_BF2_COREPLL_ADDR + MLXBF_GIGE_BF2_COREPLL_SIZE - 1,
+		.name = "COREPLL_RES"
+	},
+};
+
+/* Returns core clock i1clk in Hz */
+static u64 calculate_i1clk(struct mlxbf_gige *priv)
+{
+	u8 core_od, core_r;
+	u64 freq_output;
+	u32 reg1, reg2;
+	u32 core_f;
+
+	reg1 = readl(priv->clk_io + MLXBF_GIGE_MDIO_PLL_I1CLK_REG1);
+	reg2 = readl(priv->clk_io + MLXBF_GIGE_MDIO_PLL_I1CLK_REG2);
+
+	core_f = (reg1 & MLXBF_GIGE_MDIO_CORE_F_MASK) >>
+		MLXBF_GIGE_MDIO_CORE_F_SHIFT;
+	core_r = (reg1 & MLXBF_GIGE_MDIO_CORE_R_MASK) >>
+		MLXBF_GIGE_MDIO_CORE_R_SHIFT;
+	core_od = (reg2 & MLXBF_GIGE_MDIO_CORE_OD_MASK) >>
+		MLXBF_GIGE_MDIO_CORE_OD_SHIFT;
+
+	/* Compute PLL output frequency as follow:
+	 *
+	 *                                      CORE_F / 16384
+	 * freq_output = freq_reference * ----------------------------
+	 *                               (CORE_R + 1) * (CORE_OD + 1)
+	 */
+	freq_output = div_u64((MLXBF_GIGE_MDIO_FREQ_REFERENCE * core_f),
+			      MLXBF_GIGE_MDIO_COREPLL_CONST);
+	freq_output = div_u64(freq_output, (core_r + 1) * (core_od + 1));
+
+	return freq_output;
+}
+
+/* Formula for encoding the MDIO period. The encoded value is
+ * passed to the MDIO config register.
+ *
+ * mdc_clk = 2*(val + 1)*(core clock in sec)
+ *
+ * i1clk is in Hz:
+ * 400 ns = 2*(val + 1)*(1/i1clk)
+ *
+ * val = (((400/10^9) / (1/i1clk) / 2) - 1)
+ * val = (400/2 * i1clk)/10^9 - 1
+ */
+static u8 mdio_period_map(struct mlxbf_gige *priv)
+{
+	u8 mdio_period;
+	u64 i1clk;
+
+	i1clk = calculate_i1clk(priv);
+
+	mdio_period = div_u64((MLXBF_GIGE_MDC_CLK_NS >> 1) * i1clk, 1000000000) - 1;
+
+	return mdio_period;
+}
+
 static u32 mlxbf_gige_mdio_create_cmd(u16 data, int phy_add,
 				      int phy_reg, u32 opcode)
 {
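
To make the two formulas concrete, here is a worked example with invented register values (CORE_F = 45056, CORE_R = 0, CORE_OD = 3), reduced to plain userspace arithmetic; it illustrates the math in the comments above and is not driver code:

.. code:: c

   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
   	const uint64_t freq_reference = 156250000ULL; /* MLXBF_GIGE_MDIO_FREQ_REFERENCE */
   	const uint64_t corepll_const = 16384ULL;      /* MLXBF_GIGE_MDIO_COREPLL_CONST */
   	const uint32_t core_f = 45056, core_r = 0, core_od = 3; /* invented sample */
   	uint64_t i1clk;
   	unsigned int mdio_period;

   	/* freq_output = freq_reference * (CORE_F / 16384) / ((CORE_R + 1) * (CORE_OD + 1)) */
   	i1clk = freq_reference * core_f / corepll_const;
   	i1clk /= (core_r + 1) * (core_od + 1);

   	/* val = (400/2 * i1clk) / 10^9 - 1, with 400 ns as the target MDC period */
   	mdio_period = (400 / 2) * i1clk / 1000000000ULL - 1;

   	/* prints: i1clk=107421875 Hz, mdio_period=20 */
   	printf("i1clk=%llu Hz, mdio_period=%u\n",
   	       (unsigned long long)i1clk, mdio_period);
   	return 0;
   }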
@@ -124,9 +186,9 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
 				 int phy_reg, u16 val)
 {
 	struct mlxbf_gige *priv = bus->priv;
-	u32 temp;
 	u32 cmd;
 	int ret;
+	u32 temp;
 
 	if (phy_reg & MII_ADDR_C45)
 		return -EOPNOTSUPP;
@@ -144,18 +206,44 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
 	return ret;
 }
 
+static void mlxbf_gige_mdio_cfg(struct mlxbf_gige *priv)
+{
+	u8 mdio_period;
+	u32 val;
+
+	mdio_period = mdio_period_map(priv);
+
+	val = MLXBF_GIGE_MDIO_CFG_VAL;
+	val |= FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period);
+	writel(val, priv->mdio_io + MLXBF_GIGE_MDIO_CFG_OFFSET);
+}
+
 int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv)
 {
 	struct device *dev = &pdev->dev;
+	struct resource *res;
 	int ret;
 
 	priv->mdio_io = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MDIO9);
 	if (IS_ERR(priv->mdio_io))
 		return PTR_ERR(priv->mdio_io);
 
-	/* Configure mdio parameters */
-	writel(MLXBF_GIGE_MDIO_CFG_VAL,
-	       priv->mdio_io + MLXBF_GIGE_MDIO_CFG_OFFSET);
+	/* clk resource shared with other drivers so cannot use
+	 * devm_platform_ioremap_resource
+	 */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, MLXBF_GIGE_RES_CLK);
+	if (!res) {
+		/* For backward compatibility with older ACPI tables, also keep
+		 * CLK resource internal to the driver.
+		 */
+		res = &corepll_params[MLXBF_GIGE_VERSION_BF2];
+	}
+
+	priv->clk_io = devm_ioremap(dev, res->start, resource_size(res));
+	if (IS_ERR(priv->clk_io))
+		return PTR_ERR(priv->clk_io);
+
+	mlxbf_gige_mdio_cfg(priv);
+
 	priv->mdiobus = devm_mdiobus_alloc(dev);
 	if (!priv->mdiobus) {
@@ -8,6 +8,8 @@
 #ifndef __MLXBF_GIGE_REGS_H__
 #define __MLXBF_GIGE_REGS_H__
 
+#define MLXBF_GIGE_VERSION                            0x0000
+#define MLXBF_GIGE_VERSION_BF2                        0x0
 #define MLXBF_GIGE_STATUS                             0x0010
 #define MLXBF_GIGE_STATUS_READY                       BIT(0)
 #define MLXBF_GIGE_INT_STATUS                         0x0028
@@ -423,7 +423,8 @@ mlxsw_sp_span_gretap4_route(const struct net_device *to_dev,
 
 	parms = mlxsw_sp_ipip_netdev_parms4(to_dev);
 	ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp,
-			    0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0);
+			    0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0,
+			    0);
 
 	rt = ip_route_output_key(tun->net, &fl4);
 	if (IS_ERR(rt))
@@ -425,7 +425,8 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx)
 	lan966x_ifh_get_src_port(skb->data, &src_port);
 	lan966x_ifh_get_timestamp(skb->data, &timestamp);
 
-	WARN_ON(src_port >= lan966x->num_phys_ports);
+	if (WARN_ON(src_port >= lan966x->num_phys_ports))
+		goto free_skb;
 
 	skb->dev = lan966x->ports[src_port]->dev;
 	skb_pull(skb, IFH_LEN * sizeof(u32));
@@ -449,6 +450,8 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx)
 
 	return skb;
 
+free_skb:
+	kfree_skb(skb);
 unmap_page:
 	dma_unmap_page(lan966x->dev, (dma_addr_t)db->dataptr,
 		       FDMA_DCB_STATUS_BLOCKL(db->status),
@@ -113,6 +113,8 @@ static void sparx5_xtr_grp(struct sparx5 *sparx5, u8 grp, bool byte_swap)
 			/* This assumes STATUS_WORD_POS == 1, Status
 			 * just after last data
 			 */
+			if (!byte_swap)
+				val = ntohl((__force __be32)val);
 			byte_cnt -= (4 - XTR_VALID_BYTES(val));
 			eof_flag = true;
 			break;
@@ -127,10 +127,11 @@ static int nfp_policer_validate(const struct flow_action *action,
 		return -EOPNOTSUPP;
 	}
 
-	if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
+	if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE &&
+	    act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
 	    act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
 		NL_SET_ERR_MSG_MOD(extack,
-				   "Offload not supported when conform action is not pipe or ok");
+				   "Offload not supported when conform action is not continue, pipe or ok");
 		return -EOPNOTSUPP;
 	}
 
@@ -1630,21 +1630,21 @@ static void nfp_net_stat64(struct net_device *netdev,
 		unsigned int start;
 
 		do {
-			start = u64_stats_fetch_begin(&r_vec->rx_sync);
+			start = u64_stats_fetch_begin_irq(&r_vec->rx_sync);
 			data[0] = r_vec->rx_pkts;
 			data[1] = r_vec->rx_bytes;
 			data[2] = r_vec->rx_drops;
-		} while (u64_stats_fetch_retry(&r_vec->rx_sync, start));
+		} while (u64_stats_fetch_retry_irq(&r_vec->rx_sync, start));
 		stats->rx_packets += data[0];
 		stats->rx_bytes += data[1];
 		stats->rx_dropped += data[2];
 
 		do {
-			start = u64_stats_fetch_begin(&r_vec->tx_sync);
+			start = u64_stats_fetch_begin_irq(&r_vec->tx_sync);
 			data[0] = r_vec->tx_pkts;
 			data[1] = r_vec->tx_bytes;
 			data[2] = r_vec->tx_errors;
-		} while (u64_stats_fetch_retry(&r_vec->tx_sync, start));
+		} while (u64_stats_fetch_retry_irq(&r_vec->tx_sync, start));
 		stats->tx_packets += data[0];
 		stats->tx_bytes += data[1];
 		stats->tx_errors += data[2];
@@ -649,7 +649,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 		unsigned int start;
 
 		do {
-			start = u64_stats_fetch_begin(&nn->r_vecs[i].rx_sync);
+			start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].rx_sync);
 			data[0] = nn->r_vecs[i].rx_pkts;
 			tmp[0] = nn->r_vecs[i].hw_csum_rx_ok;
 			tmp[1] = nn->r_vecs[i].hw_csum_rx_inner_ok;
@@ -657,10 +657,10 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 			tmp[3] = nn->r_vecs[i].hw_csum_rx_error;
 			tmp[4] = nn->r_vecs[i].rx_replace_buf_alloc_fail;
 			tmp[5] = nn->r_vecs[i].hw_tls_rx;
-		} while (u64_stats_fetch_retry(&nn->r_vecs[i].rx_sync, start));
+		} while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].rx_sync, start));
 
 		do {
-			start = u64_stats_fetch_begin(&nn->r_vecs[i].tx_sync);
+			start = u64_stats_fetch_begin_irq(&nn->r_vecs[i].tx_sync);
 			data[1] = nn->r_vecs[i].tx_pkts;
 			data[2] = nn->r_vecs[i].tx_busy;
 			tmp[6] = nn->r_vecs[i].hw_csum_tx;
@@ -670,7 +670,7 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 			tmp[10] = nn->r_vecs[i].hw_tls_tx;
 			tmp[11] = nn->r_vecs[i].tls_tx_fallback;
 			tmp[12] = nn->r_vecs[i].tls_tx_no_fallback;
-		} while (u64_stats_fetch_retry(&nn->r_vecs[i].tx_sync, start));
+		} while (u64_stats_fetch_retry_irq(&nn->r_vecs[i].tx_sync, start));
 
 		data += NN_RVEC_PER_Q_STATS;
 
@@ -507,6 +507,7 @@ int nfp_eth_set_idmode(struct nfp_cpp *cpp, unsigned int idx, bool state)
 	if (nfp_nsp_get_abi_ver_minor(nsp) < 32) {
 		nfp_err(nfp_nsp_cpp(nsp),
 			"set id mode operation not supported, please update flash\n");
+		nfp_eth_config_cleanup_end(nsp);
 		return -EOPNOTSUPP;
 	}
 
@@ -1273,7 +1273,7 @@ static int ofdpa_port_ipv4_neigh(struct ofdpa_port *ofdpa_port,
 	bool removing;
 	int err = 0;
 
-	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+	entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
 	if (!entry)
 		return -ENOMEM;
 
@@ -1037,6 +1037,8 @@ static int smsc911x_mii_probe(struct net_device *dev)
 		return ret;
 	}
 
+	/* Indicate that the MAC is responsible for managing PHY PM */
+	phydev->mac_managed_pm = true;
 	phy_attached_info(phydev);
 
 	phy_set_max_speed(phydev, SPEED_100);
@@ -2587,6 +2589,8 @@ static int smsc911x_suspend(struct device *dev)
 	if (netif_running(ndev)) {
 		netif_stop_queue(ndev);
 		netif_device_detach(ndev);
+		if (!device_may_wakeup(dev))
+			phy_stop(ndev->phydev);
 	}
 
 	/* enable wake on LAN, energy detection and the external PME
@@ -2628,6 +2632,8 @@ static int smsc911x_resume(struct device *dev)
 	if (netif_running(ndev)) {
 		netif_device_attach(ndev);
 		netif_start_queue(ndev);
+		if (!device_may_wakeup(dev))
+			phy_start(ndev->phydev);
 	}
 
 	return 0;
@@ -1310,10 +1310,11 @@ static void adf7242_remove(struct spi_device *spi)
 
 	debugfs_remove_recursive(lp->debugfs_root);
 
+	ieee802154_unregister_hw(lp->hw);
+
 	cancel_delayed_work_sync(&lp->work);
 	destroy_workqueue(lp->wqueue);
 
-	ieee802154_unregister_hw(lp->hw);
 	mutex_destroy(&lp->bmux);
 	ieee802154_free_hw(lp->hw);
 }
@@ -2293,7 +2293,7 @@ static int ca8210_set_csma_params(
  * @retries:  Number of retries
  *
  * Sets the number of times to retry a transmission if no acknowledgment was
- * was received from the other end when one was requested.
+ * received from the other end when one was requested.
  *
  * Return: 0 or linux error code
  */
@@ -504,6 +504,7 @@ cc2520_tx(struct ieee802154_hw *hw, struct sk_buff *skb)
 		goto err_tx;
 
 	if (status & CC2520_STATUS_TX_UNDERFLOW) {
+		rc = -EINVAL;
 		dev_err(&priv->spi->dev, "cc2520 tx underflow exception\n");
 		goto err_tx;
 	}
@@ -67,10 +67,10 @@ nsim_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
 	unsigned int start;
 
 	do {
-		start = u64_stats_fetch_begin(&ns->syncp);
+		start = u64_stats_fetch_begin_irq(&ns->syncp);
 		stats->tx_bytes = ns->tx_bytes;
 		stats->tx_packets = ns->tx_packets;
-	} while (u64_stats_fetch_retry(&ns->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&ns->syncp, start));
 }
 
 static int
@@ -2873,12 +2873,18 @@ static int lan8814_config_init(struct phy_device *phydev)
 	return 0;
 }
 
+/* It is expected that there will not be any 'lan8814_take_coma_mode'
+ * function called in suspend. Because the GPIO line can be shared, so if one of
+ * the phys goes back in coma mode, then all the other PHYs will go, which is
+ * wrong.
+ */
 static int lan8814_release_coma_mode(struct phy_device *phydev)
 {
 	struct gpio_desc *gpiod;
 
 	gpiod = devm_gpiod_get_optional(&phydev->mdio.dev, "coma-mode",
-					GPIOD_OUT_HIGH_OPEN_DRAIN);
+					GPIOD_OUT_HIGH_OPEN_DRAIN |
+					GPIOD_FLAGS_BIT_NONEXCLUSIVE);
 	if (IS_ERR(gpiod))
 		return PTR_ERR(gpiod);
 
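
The micrel fix hinges on GPIOD_FLAGS_BIT_NONEXCLUSIVE: several LAN8814 PHYs can sit behind one "coma-mode" line, and an exclusive request from the second PHY instance would fail with -EBUSY. A hedged sketch of the shared-consumer pattern (the driver context and function name are invented):

.. code:: c

   #include <linux/gpio/consumer.h>

   static int demo_release_shared_gpio(struct device *dev)
   {
   	struct gpio_desc *gpiod;

   	/* NONEXCLUSIVE lets every consumer of the shared line get the
   	 * same descriptor back instead of -EBUSY */
   	gpiod = devm_gpiod_get_optional(dev, "coma-mode",
   					GPIOD_OUT_HIGH_OPEN_DRAIN |
   					GPIOD_FLAGS_BIT_NONEXCLUSIVE);
   	if (IS_ERR(gpiod))
   		return PTR_ERR(gpiod);

   	if (gpiod)
   		gpiod_set_value_cansleep(gpiod, 0); /* de-assert for all users */

   	return 0;
   }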
@@ -777,6 +777,13 @@ static const struct usb_device_id	products[] = {
 },
 #endif
 
+/* Lenovo ThinkPad OneLink+ Dock (based on Realtek RTL8153) */
+{
+	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3054, USB_CLASS_COMM,
+			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+	.driver_info = 0,
+},
+
 /* ThinkPad USB-C Dock (based on Realtek RTL8153) */
 {
 	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3062, USB_CLASS_COMM,
@@ -770,6 +770,7 @@ enum rtl8152_flags {
 	RX_EPROTO,
 };
 
+#define DEVICE_ID_THINKPAD_ONELINK_PLUS_DOCK		0x3054
 #define DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2	0x3082
 #define DEVICE_ID_THINKPAD_USB_C_DONGLE			0x720c
 #define DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2		0xa387
@@ -9581,6 +9582,7 @@ static bool rtl8152_supports_lenovo_macpassthru(struct usb_device *udev)
 
 	if (vendor_id == VENDOR_ID_LENOVO) {
 		switch (product_id) {
+		case DEVICE_ID_THINKPAD_ONELINK_PLUS_DOCK:
 		case DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2:
 		case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2:
 		case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN3:
@@ -9828,6 +9830,7 @@ static const struct usb_device_id rtl8152_table[] = {
 	REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927),
 	REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101),
 	REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x304f),
+	REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3054),
 	REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3062),
 	REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3069),
 	REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3082),
@@ -246,7 +246,8 @@ static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
 				       __be32 daddr, __be32 saddr,
 				       __be32 key, __u8 tos,
 				       struct net *net, int oif,
-				       __u32 mark, __u32 tun_inner_hash)
+				       __u32 mark, __u32 tun_inner_hash,
+				       __u8 flow_flags)
 {
 	memset(fl4, 0, sizeof(*fl4));
 
@@ -263,6 +264,7 @@ static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
 	fl4->fl4_gre_key = key;
 	fl4->flowi4_mark = mark;
 	fl4->flowi4_multipath_hash = tun_inner_hash;
+	fl4->flowi4_flags = flow_flags;
 }
 
 int ip_tunnel_init(struct net_device *dev);
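
Every ip_tunnel_init_flow() caller gains a trailing flow_flags argument, as the mlxsw hunk above and the ip_gre/ip_tunnel hunks further down show. A sketch of a collect_md-style caller after the change (the wrapper function is invented; the argument list mirrors the gre_fill_metadata_dst() hunk below):

.. code:: c

   #include <net/ip_tunnels.h>

   /* route lookup for a metadata-based GRE skb, forwarding the tunnel
    * key's flow flags into fl4->flowi4_flags via the new parameter */
   static struct rtable *demo_route_md_tunnel(struct net_device *dev,
   					   struct sk_buff *skb,
   					   const struct ip_tunnel_key *key)
   {
   	struct flowi4 fl4;

   	ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst,
   			    key->u.ipv4.src, tunnel_id_to_key32(key->tun_id),
   			    key->tos & ~INET_ECN_MASK, dev_net(dev), 0,
   			    skb->mark, skb_get_hash(skb),
   			    key->flow_flags); /* new trailing argument */

   	return ip_route_output_key(dev_net(dev), &fl4);
   }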
@@ -179,6 +179,8 @@ struct netns_ipv4 {
 	unsigned int sysctl_tcp_fastopen_blackhole_timeout;
 	atomic_t tfo_active_disable_times;
 	unsigned long tfo_active_disable_stamp;
+	u32 tcp_challenge_timestamp;
+	u32 tcp_challenge_count;
 
 	int sysctl_udp_wmem_min;
 	int sysctl_udp_rmem_min;
@@ -79,7 +79,7 @@ struct bpf_insn {
 /* Key of an a BPF_MAP_TYPE_LPM_TRIE entry */
 struct bpf_lpm_trie_key {
 	__u32	prefixlen;	/* up to 32 for AF_INET, 128 for AF_INET6 */
-	__u8	data[];	/* Arbitrary size */
+	__u8	data[0];	/* Arbitrary size */
 };
 
 struct bpf_cgroup_storage_key {
@@ -56,7 +56,7 @@
 #define VIRTIO_NET_F_MQ	22	/* Device supports Receive Flow
 					 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
-#define VIRTIO_NET_F_NOTF_COAL	53	/* Guest can handle notifications coalescing */
+#define VIRTIO_NET_F_NOTF_COAL	53	/* Device supports notifications coalescing */
 #define VIRTIO_NET_F_HASH_REPORT  57	/* Supports hash report */
 #define VIRTIO_NET_F_RSS	  60	/* Supports RSS RX steering */
 #define VIRTIO_NET_F_RSC_EXT	  61	/* extended coalescing info */
@@ -364,24 +364,24 @@ struct virtio_net_hash_config {
  */
 #define VIRTIO_NET_CTRL_NOTF_COAL		6
 /*
- * Set the tx-usecs/tx-max-packets patameters.
- * tx-usecs - Maximum number of usecs to delay a TX notification.
- * tx-max-packets - Maximum number of packets to send before a TX notification.
+ * Set the tx-usecs/tx-max-packets parameters.
  */
 struct virtio_net_ctrl_coal_tx {
+	/* Maximum number of packets to send before a TX notification */
 	__le32 tx_max_packets;
+	/* Maximum number of usecs to delay a TX notification */
 	__le32 tx_usecs;
 };
 
 #define VIRTIO_NET_CTRL_NOTF_COAL_TX_SET		0
 
 /*
- * Set the rx-usecs/rx-max-packets patameters.
- * rx-usecs - Maximum number of usecs to delay a RX notification.
- * rx-max-frames - Maximum number of packets to receive before a RX notification.
+ * Set the rx-usecs/rx-max-packets parameters.
 */
 struct virtio_net_ctrl_coal_rx {
+	/* Maximum number of packets to receive before a RX notification */
 	__le32 rx_max_packets;
+	/* Maximum number of usecs to delay a RX notification */
 	__le32 rx_usecs;
 };
 
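
For orientation, this is roughly how a driver fills the TX command those comments document before submitting it on the control virtqueue with class VIRTIO_NET_CTRL_NOTF_COAL and command VIRTIO_NET_CTRL_NOTF_COAL_TX_SET. A sketch only: the helper name is invented and the actual submission step is elided.

.. code:: c

   #include <linux/virtio_net.h>

   static void demo_fill_coal_tx(struct virtio_net_ctrl_coal_tx *coal,
   			      u32 tx_usecs, u32 tx_max_packets)
   {
   	/* both fields travel little-endian on the control virtqueue */
   	coal->tx_usecs = cpu_to_le32(tx_usecs);
   	coal->tx_max_packets = cpu_to_le32(tx_max_packets);
   	/* then send with class VIRTIO_NET_CTRL_NOTF_COAL,
   	 * command VIRTIO_NET_CTRL_NOTF_COAL_TX_SET */
   }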
@@ -921,8 +921,10 @@ static void purge_effective_progs(struct cgroup *cgrp, struct bpf_prog *prog,
 				pos++;
 			}
 		}
+
+		/* no link or prog match, skip the cgroup of this layer */
+		continue;
 found:
 		BUG_ON(!cg);
 		progs = rcu_dereference_protected(
 			desc->bpf.effective[atype],
 			lockdep_is_held(&cgroup_mutex));
@@ -971,7 +971,7 @@ pure_initcall(bpf_jit_charge_init);
 
 int bpf_jit_charge_modmem(u32 size)
 {
-	if (atomic_long_add_return(size, &bpf_jit_current) > bpf_jit_limit) {
+	if (atomic_long_add_return(size, &bpf_jit_current) > READ_ONCE(bpf_jit_limit)) {
 		if (!bpf_capable()) {
 			atomic_long_sub(size, &bpf_jit_current);
 			return -EPERM;
@@ -5197,7 +5197,7 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	switch (func_id) {
 	case BPF_FUNC_sys_bpf:
-		return &bpf_sys_bpf_proto;
+		return !perfmon_capable() ? NULL : &bpf_sys_bpf_proto;
 	case BPF_FUNC_btf_find_by_name_kind:
 		return &bpf_btf_find_by_name_kind_proto;
 	case BPF_FUNC_sys_close:
@@ -6066,6 +6066,9 @@ skip_type_check:
 			return -EACCES;
 		}
 		meta->mem_size = reg->var_off.value;
+		err = mark_chain_precision(env, regno);
+		if (err)
+			return err;
 		break;
 	case ARG_PTR_TO_INT:
 	case ARG_PTR_TO_LONG:
@@ -7030,8 +7033,7 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 	struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
 	struct bpf_reg_state *regs = cur_regs(env), *reg;
 	struct bpf_map *map = meta->map_ptr;
-	struct tnum range;
-	u64 val;
+	u64 val, max;
 	int err;
 
 	if (func_id != BPF_FUNC_tail_call)
@@ -7041,10 +7043,11 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 		return -EINVAL;
 	}
 
-	range = tnum_range(0, map->max_entries - 1);
 	reg = &regs[BPF_REG_3];
+	val = reg->var_off.value;
+	max = map->max_entries;
 
-	if (!register_is_const(reg) || !tnum_in(range, reg->var_off)) {
+	if (!(register_is_const(reg) && val < max)) {
 		bpf_map_key_store(aux, BPF_MAP_KEY_POISON);
 		return 0;
 	}
@@ -7052,8 +7055,6 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 	err = mark_chain_precision(env, BPF_REG_3);
 	if (err)
 		return err;
-
-	val = reg->var_off.value;
 	if (bpf_map_key_unseen(aux))
 		bpf_map_key_store(aux, val);
 	else if (!bpf_map_key_poisoned(aux) &&
@@ -4179,6 +4179,17 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, void *data,
 		}
 	}
 
+	if (i == ARRAY_SIZE(hci_cc_table)) {
+		/* Unknown opcode, assume byte 0 contains the status, so
+		 * that e.g. __hci_cmd_sync() properly returns errors
+		 * for vendor specific commands send by HCI drivers.
+		 * If a vendor doesn't actually follow this convention we may
+		 * need to introduce a vendor CC table in order to properly set
+		 * the status.
+		 */
+		*status = skb->data[0];
+	}
+
 	handle_cmd_cnt_and_timer(hdev, ev->ncmd);
 
 	hci_req_cmd_complete(hdev, *opcode, *status, req_complete,
@@ -5790,7 +5801,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
 	 */
 	hci_dev_clear_flag(hdev, HCI_LE_ADV);
 
-	conn = hci_lookup_le_connect(hdev);
+	conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, bdaddr);
 	if (!conn) {
 		/* In case of error status and there is no connection pending
 		 * just unlock as there is nothing to cleanup.
@@ -4773,9 +4773,11 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason)
 		/* Cleanup hci_conn object if it cannot be cancelled as it
 		 * likelly means the controller and host stack are out of sync.
 		 */
-		if (err)
+		if (err) {
+			hci_dev_lock(hdev);
 			hci_conn_failed(conn, err);
+			hci_dev_unlock(hdev);
+		}
 
 		return err;
 	case BT_CONNECT2:
 		return hci_reject_conn_sync(hdev, conn, reason);
@@ -5288,17 +5290,21 @@ int hci_suspend_sync(struct hci_dev *hdev)
 	/* Prevent disconnects from causing scanning to be re-enabled */
 	hci_pause_scan_sync(hdev);
 
-	/* Soft disconnect everything (power off) */
-	err = hci_disconnect_all_sync(hdev, HCI_ERROR_REMOTE_POWER_OFF);
-	if (err) {
-		/* Set state to BT_RUNNING so resume doesn't notify */
-		hdev->suspend_state = BT_RUNNING;
-		hci_resume_sync(hdev);
-		return err;
-	}
+	if (hci_conn_count(hdev)) {
+		/* Soft disconnect everything (power off) */
+		err = hci_disconnect_all_sync(hdev, HCI_ERROR_REMOTE_POWER_OFF);
+		if (err) {
+			/* Set state to BT_RUNNING so resume doesn't notify */
+			hdev->suspend_state = BT_RUNNING;
+			hci_resume_sync(hdev);
+			return err;
+		}
 
-	/* Update event mask so only the allowed event can wakeup the host */
-	hci_set_event_mask_sync(hdev);
+		/* Update event mask so only the allowed event can wakeup the
+		 * host.
+		 */
+		hci_set_event_mask_sync(hdev);
+	}
 
 	/* Only configure accept list if disconnect succeeded and wake
 	 * isn't being prevented.
@@ -83,14 +83,14 @@ static void hidp_copy_session(struct hidp_session *session, struct hidp_conninfo
 		ci->product = session->input->id.product;
 		ci->version = session->input->id.version;
 		if (session->input->name)
-			strlcpy(ci->name, session->input->name, 128);
+			strscpy(ci->name, session->input->name, 128);
 		else
-			strlcpy(ci->name, "HID Boot Device", 128);
+			strscpy(ci->name, "HID Boot Device", 128);
 	} else if (session->hid) {
 		ci->vendor  = session->hid->vendor;
 		ci->product = session->hid->product;
 		ci->version = session->hid->version;
-		strlcpy(ci->name, session->hid->name, 128);
+		strscpy(ci->name, session->hid->name, 128);
 	}
 }
 
@@ -1309,7 +1309,7 @@ static int iso_sock_shutdown(struct socket *sock, int how)
 	struct sock *sk = sock->sk;
 	int err = 0;
 
-	BT_DBG("sock %p, sk %p", sock, sk);
+	BT_DBG("sock %p, sk %p, how %d", sock, sk, how);
 
 	if (!sk)
 		return 0;
@@ -1317,17 +1317,32 @@ static int iso_sock_shutdown(struct socket *sock, int how)
 	sock_hold(sk);
 	lock_sock(sk);
 
-	if (!sk->sk_shutdown) {
-		sk->sk_shutdown = SHUTDOWN_MASK;
-		iso_sock_clear_timer(sk);
-		__iso_sock_close(sk);
-
-		if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime &&
-		    !(current->flags & PF_EXITING))
-			err = bt_sock_wait_state(sk, BT_CLOSED,
-						 sk->sk_lingertime);
+	switch (how) {
+	case SHUT_RD:
+		if (sk->sk_shutdown & RCV_SHUTDOWN)
+			goto unlock;
+		sk->sk_shutdown |= RCV_SHUTDOWN;
+		break;
+	case SHUT_WR:
+		if (sk->sk_shutdown & SEND_SHUTDOWN)
+			goto unlock;
+		sk->sk_shutdown |= SEND_SHUTDOWN;
+		break;
+	case SHUT_RDWR:
+		if (sk->sk_shutdown & SHUTDOWN_MASK)
+			goto unlock;
+		sk->sk_shutdown |= SHUTDOWN_MASK;
+		break;
 	}
 
+	iso_sock_clear_timer(sk);
+	__iso_sock_close(sk);
+
+	if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime &&
+	    !(current->flags & PF_EXITING))
+		err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime);
+
+unlock:
 	release_sock(sk);
 	sock_put(sk);
 
@@ -1992,11 +1992,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
 			src_match = !bacmp(&c->src, src);
 			dst_match = !bacmp(&c->dst, dst);
 			if (src_match && dst_match) {
-				c = l2cap_chan_hold_unless_zero(c);
-				if (c) {
-					read_unlock(&chan_list_lock);
-					return c;
-				}
+				if (!l2cap_chan_hold_unless_zero(c))
+					continue;
+
+				read_unlock(&chan_list_lock);
+				return c;
 			}
 
 			/* Closest match */
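
The l2cap hunk above switches the lookup loop to a "take a reference only if the count is still non-zero" idiom, continuing the scan instead of giving up when the exact match is already dying. A generic sketch of the same idiom using kref (all names here are invented, not Bluetooth code):

.. code:: c

   #include <linux/kref.h>
   #include <linux/list.h>

   struct demo_chan {
   	struct list_head list;
   	struct kref refs;
   	int psm;
   };

   static struct demo_chan *demo_lookup(struct list_head *head, int psm)
   {
   	struct demo_chan *c;

   	list_for_each_entry(c, head, list) {
   		if (c->psm != psm)
   			continue;
   		if (!kref_get_unless_zero(&c->refs))
   			continue;	/* dying entry, keep scanning */
   		return c;		/* caller now owns a reference */
   	}
   	return NULL;
   }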
@@ -4547,6 +4547,22 @@ static int set_exp_feature(struct sock *sk, struct hci_dev *hdev,
 			       MGMT_STATUS_NOT_SUPPORTED);
 }
 
+static u32 get_params_flags(struct hci_dev *hdev,
+			    struct hci_conn_params *params)
+{
+	u32 flags = hdev->conn_flags;
+
+	/* Devices using RPAs can only be programmed in the acceptlist if
+	 * LL Privacy has been enable otherwise they cannot mark
+	 * HCI_CONN_FLAG_REMOTE_WAKEUP.
+	 */
+	if ((flags & HCI_CONN_FLAG_REMOTE_WAKEUP) && !use_ll_privacy(hdev) &&
+	    hci_find_irk_by_addr(hdev, &params->addr, params->addr_type))
+		flags &= ~HCI_CONN_FLAG_REMOTE_WAKEUP;
+
+	return flags;
+}
+
 static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
 			    u16 data_len)
 {
@@ -4578,10 +4594,10 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
 	} else {
 		params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
 						le_addr_type(cp->addr.type));
-
 		if (!params)
 			goto done;
 
+		supported_flags = get_params_flags(hdev, params);
 		current_flags = params->flags;
 	}
 
@@ -4649,38 +4665,35 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
 			bt_dev_warn(hdev, "No such BR/EDR device %pMR (0x%x)",
 				    &cp->addr.bdaddr, cp->addr.type);
 		}
-	} else {
-		params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
-						le_addr_type(cp->addr.type));
-		if (params) {
-			/* Devices using RPAs can only be programmed in the
-			 * acceptlist LL Privacy has been enable otherwise they
-			 * cannot mark HCI_CONN_FLAG_REMOTE_WAKEUP.
-			 */
-			if ((current_flags & HCI_CONN_FLAG_REMOTE_WAKEUP) &&
-			    !use_ll_privacy(hdev) &&
-			    hci_find_irk_by_addr(hdev, &params->addr,
-						 params->addr_type)) {
-				bt_dev_warn(hdev,
-					    "Cannot set wakeable for RPA");
-				goto unlock;
-			}
-
-			params->flags = current_flags;
-			status = MGMT_STATUS_SUCCESS;
-
-			/* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
-			 * has been set.
-			 */
-			if (params->flags & HCI_CONN_FLAG_DEVICE_PRIVACY)
-				hci_update_passive_scan(hdev);
-		} else {
-			bt_dev_warn(hdev, "No such LE device %pMR (0x%x)",
-				    &cp->addr.bdaddr,
-				    le_addr_type(cp->addr.type));
-		}
+
+		goto unlock;
 	}
 
+	params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
+					le_addr_type(cp->addr.type));
+	if (!params) {
+		bt_dev_warn(hdev, "No such LE device %pMR (0x%x)",
+			    &cp->addr.bdaddr, le_addr_type(cp->addr.type));
+		goto unlock;
+	}
+
+	supported_flags = get_params_flags(hdev, params);
+
+	if ((supported_flags | current_flags) != supported_flags) {
+		bt_dev_warn(hdev, "Bad flag given (0x%x) vs supported (0x%0x)",
+			    current_flags, supported_flags);
+		goto unlock;
+	}
+
+	params->flags = current_flags;
+	status = MGMT_STATUS_SUCCESS;
+
+	/* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
+	 * has been set.
+	 */
+	if (params->flags & HCI_CONN_FLAG_DEVICE_PRIVACY)
+		hci_update_passive_scan(hdev);
+
 unlock:
 	hci_dev_unlock(hdev);
 
@@ -5054,7 +5067,6 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
 		else
 			status = MGMT_STATUS_FAILED;
 
-		mgmt_pending_remove(cmd);
 		goto unlock;
 	}
 
@@ -461,7 +461,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 
 			if (copied == len)
 				break;
-		} while (!sg_is_last(sge));
+		} while ((i != msg_rx->sg.end) && !sg_is_last(sge));
 
 		if (unlikely(peek)) {
 			msg_rx = sk_psock_next_msg(psock, msg_rx);
@@ -471,7 +471,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		}
 
 		msg_rx->sg.start = i;
-		if (!sge->length && sg_is_last(sge)) {
+		if (!sge->length && (i == msg_rx->sg.end || sg_is_last(sge))) {
 			msg_rx = sk_psock_dequeue_msg(psock);
 			kfree_sk_msg(msg_rx);
 		}
@@ -45,7 +45,7 @@ static struct sk_buff *hellcreek_rcv(struct sk_buff *skb,
 
 	skb->dev = dsa_master_find_slave(dev, 0, port);
 	if (!skb->dev) {
-		netdev_warn(dev, "Failed to get source port: %d\n", port);
+		netdev_warn_once(dev, "Failed to get source port: %d\n", port);
 		return NULL;
 	}
 
@@ -389,7 +389,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
 	dev_match = dev_match || (res.type == RTN_LOCAL &&
 				  dev == net->loopback_dev);
 	if (dev_match) {
-		ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
+		ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
 		return ret;
 	}
 	if (no_addr)
@@ -401,7 +401,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
 	ret = 0;
 	if (fib_lookup(net, &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE) == 0) {
 		if (res.type == RTN_UNICAST)
-			ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_HOST;
+			ret = FIB_RES_NHC(res)->nhc_scope >= RT_SCOPE_LINK;
 	}
 	return ret;
 
|
@ -609,7 +609,7 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
|
||||
ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
|
||||
tunnel_id_to_key32(key->tun_id),
|
||||
key->tos & ~INET_ECN_MASK, dev_net(dev), 0,
|
||||
skb->mark, skb_get_hash(skb));
|
||||
skb->mark, skb_get_hash(skb), key->flow_flags);
|
||||
rt = ip_route_output_key(dev_net(dev), &fl4);
|
||||
if (IS_ERR(rt))
|
||||
return PTR_ERR(rt);
|
||||
|
@ -295,7 +295,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
|
||||
ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr,
|
||||
iph->saddr, tunnel->parms.o_key,
|
||||
RT_TOS(iph->tos), dev_net(dev),
|
||||
tunnel->parms.link, tunnel->fwmark, 0);
|
||||
tunnel->parms.link, tunnel->fwmark, 0, 0);
|
||||
rt = ip_route_output_key(tunnel->net, &fl4);
|
||||
|
||||
if (!IS_ERR(rt)) {
|
||||
@ -570,7 +570,8 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
|
||||
}
|
||||
ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src,
|
||||
tunnel_id_to_key32(key->tun_id), RT_TOS(tos),
|
||||
dev_net(dev), 0, skb->mark, skb_get_hash(skb));
|
||||
dev_net(dev), 0, skb->mark, skb_get_hash(skb),
|
||||
key->flow_flags);
|
||||
if (tunnel->encap.type != TUNNEL_ENCAP_NONE)
|
||||
goto tx_error;
|
||||
|
||||
@ -729,7 +730,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
|
||||
ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr,
|
||||
tunnel->parms.o_key, RT_TOS(tos),
|
||||
dev_net(dev), tunnel->parms.link,
|
||||
tunnel->fwmark, skb_get_hash(skb));
|
||||
tunnel->fwmark, skb_get_hash(skb), 0);
|
||||
|
||||
if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0)
|
||||
goto tx_error;
|
||||
|
@@ -3614,12 +3614,9 @@ bool tcp_oow_rate_limited(struct net *net, const struct sk_buff *skb,
 /* RFC 5961 7 [ACK Throttling] */
 static void tcp_send_challenge_ack(struct sock *sk)
 {
-	/* unprotected vars, we dont care of overwrites */
-	static u32 challenge_timestamp;
-	static unsigned int challenge_count;
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct net *net = sock_net(sk);
-	u32 count, now;
+	u32 count, now, ack_limit;
 
 	/* First check our per-socket dupack rate limit. */
 	if (__tcp_oow_rate_limited(net,
@@ -3627,18 +3624,22 @@ static void tcp_send_challenge_ack(struct sock *sk)
 				   &tp->last_oow_ack_time))
 		return;
 
+	ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit);
+	if (ack_limit == INT_MAX)
+		goto send_ack;
+
 	/* Then check host-wide RFC 5961 rate limit. */
 	now = jiffies / HZ;
-	if (now != challenge_timestamp) {
-		u32 ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit);
+	if (now != READ_ONCE(net->ipv4.tcp_challenge_timestamp)) {
 		u32 half = (ack_limit + 1) >> 1;
 
-		challenge_timestamp = now;
-		WRITE_ONCE(challenge_count, half + prandom_u32_max(ack_limit));
+		WRITE_ONCE(net->ipv4.tcp_challenge_timestamp, now);
+		WRITE_ONCE(net->ipv4.tcp_challenge_count, half + prandom_u32_max(ack_limit));
 	}
-	count = READ_ONCE(challenge_count);
+	count = READ_ONCE(net->ipv4.tcp_challenge_count);
 	if (count > 0) {
-		WRITE_ONCE(challenge_count, count - 1);
+		WRITE_ONCE(net->ipv4.tcp_challenge_count, count - 1);
+send_ack:
 		NET_INC_STATS(net, LINUX_MIB_TCPCHALLENGEACK);
 		tcp_send_ack(sk);
 	}
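
The limiter the tcp hunks above move into struct netns_ipv4 works per one-second bucket: when the second changes, a randomized budget between limit/2 and roughly 1.5*limit is drawn (RFC 5961 suggests randomizing so a blind attacker cannot count the budget down precisely), and each challenge ACK consumes one token. A userspace sketch of that logic, under invented names:

.. code:: c

   #include <stdint.h>
   #include <stdlib.h>

   struct challenge_state {
   	uint32_t timestamp;	/* second the current budget was drawn for */
   	uint32_t count;		/* remaining challenge ACKs this second */
   };

   static int challenge_ack_allowed(struct challenge_state *st,
   				 uint32_t now_sec, uint32_t limit)
   {
   	if (now_sec != st->timestamp) {
   		uint32_t half = (limit + 1) / 2;

   		st->timestamp = now_sec;
   		st->count = half + (uint32_t)(rand() % (limit ? limit : 1));
   	}
   	if (st->count > 0) {
   		st->count--;
   		return 1;	/* send the challenge ACK */
   	}
   	return 0;		/* rate limited */
   }

Keeping this state per network namespace removes the cross-namespace side channel the old global counters allowed, and the INT_MAX default skips the limiter entirely.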
|
@ -3139,8 +3139,10 @@ static int __net_init tcp_sk_init(struct net *net)
|
||||
net->ipv4.sysctl_tcp_tso_win_divisor = 3;
|
||||
/* Default TSQ limit of 16 TSO segments */
|
||||
net->ipv4.sysctl_tcp_limit_output_bytes = 16 * 65536;
|
||||
/* rfc5961 challenge ack rate limiting */
|
||||
net->ipv4.sysctl_tcp_challenge_ack_limit = 1000;
|
||||
|
||||
/* rfc5961 challenge ack rate limiting, per net-ns, disabled by default. */
|
||||
net->ipv4.sysctl_tcp_challenge_ack_limit = INT_MAX;
|
||||
|
||||
net->ipv4.sysctl_tcp_min_tso_segs = 2;
|
||||
net->ipv4.sysctl_tcp_tso_rtt_log = 9; /* 2^9 = 512 usec */
|
||||
net->ipv4.sysctl_tcp_min_rtt_wlen = 300;
|
||||
|
@ -1412,12 +1412,6 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
|
||||
psock->sk = csk;
|
||||
psock->bpf_prog = prog;
|
||||
|
||||
err = strp_init(&psock->strp, csk, &cb);
|
||||
if (err) {
|
||||
kmem_cache_free(kcm_psockp, psock);
|
||||
goto out;
|
||||
}
|
||||
|
||||
write_lock_bh(&csk->sk_callback_lock);
|
||||
|
||||
/* Check if sk_user_data is already by KCM or someone else.
|
||||
@ -1425,13 +1419,18 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
|
||||
*/
|
||||
if (csk->sk_user_data) {
|
||||
write_unlock_bh(&csk->sk_callback_lock);
|
||||
strp_stop(&psock->strp);
|
||||
strp_done(&psock->strp);
|
||||
kmem_cache_free(kcm_psockp, psock);
|
||||
err = -EALREADY;
|
||||
goto out;
|
||||
}
|
||||
|
||||
err = strp_init(&psock->strp, csk, &cb);
|
||||
if (err) {
|
||||
write_unlock_bh(&csk->sk_callback_lock);
|
||||
kmem_cache_free(kcm_psockp, psock);
|
||||
goto out;
|
||||
}
|
||||
|
||||
psock->save_data_ready = csk->sk_data_ready;
|
||||
psock->save_write_space = csk->sk_write_space;
|
||||
psock->save_state_change = csk->sk_state_change;
|
||||
|
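The kcm fix is purely an ordering change: strp_init() now runs only after the sk_user_data exclusivity check succeeds under the callback lock, so the -EALREADY path never has to stop and tear down a strparser that was already started. A hedged pthread sketch of the same shape (hypothetical names; a mutex stands in for sk_callback_lock):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
static void *sk_user_data;      /* non-NULL: someone already attached */

struct parser { int ready; };

static int parser_init(struct parser *p) { p->ready = 1; return 0; }

static int attach(void)
{
        struct parser *p = calloc(1, sizeof(*p));

        if (!p)
                return -ENOMEM;

        pthread_mutex_lock(&cb_lock);
        if (sk_user_data) {             /* exclusivity check comes first... */
                pthread_mutex_unlock(&cb_lock);
                free(p);                /* ...so unwind is only the allocation */
                return -EALREADY;
        }
        if (parser_init(p)) {           /* ...and init happens after the check */
                pthread_mutex_unlock(&cb_lock);
                free(p);
                return -1;
        }
        sk_user_data = p;
        pthread_mutex_unlock(&cb_lock);
        return 0;
}

int main(void)
{
        printf("first attach: %d, second attach: %d\n", attach(), attach());
        return 0;
}

Acquiring the cheap, fallible resource after the exclusivity check means every error path unwinds strictly what was acquired so far.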
net/mac80211/ibss.c:

@@ -530,6 +530,10 @@ int ieee80211_ibss_finish_csa(struct ieee80211_sub_if_data *sdata)

 	sdata_assert_lock(sdata);

+	/* When not connected/joined, sending CSA doesn't make sense. */
+	if (ifibss->state != IEEE80211_IBSS_MLME_JOINED)
+		return -ENOLINK;
+
 	/* update cfg80211 bss information with the new channel */
 	if (!is_zero_ether_addr(ifibss->bssid)) {
 		cbss = cfg80211_get_bss(sdata->local->hw.wiphy,
net/mac80211/scan.c:

@@ -469,16 +469,19 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
 	scan_req = rcu_dereference_protected(local->scan_req,
 					     lockdep_is_held(&local->mtx));

-	if (scan_req != local->int_scan_req) {
-		local->scan_info.aborted = aborted;
-		cfg80211_scan_done(scan_req, &local->scan_info);
-	}
 	RCU_INIT_POINTER(local->scan_req, NULL);
 	RCU_INIT_POINTER(local->scan_sdata, NULL);

 	local->scanning = 0;
 	local->scan_chandef.chan = NULL;

 	synchronize_rcu();

+	if (scan_req != local->int_scan_req) {
+		local->scan_info.aborted = aborted;
+		cfg80211_scan_done(scan_req, &local->scan_info);
+	}
+
 	/* Set power back to normal operating levels. */
 	ieee80211_hw_config(local, 0);
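As reconstructed above, the completion is reported only after synchronize_rcu(): the shared pointers are cleared first, the grace period runs, and only then may cfg80211_scan_done() free the request, so no RCU reader can still be dereferencing it. A small userspace-RCU sketch of that publish/retire order, assuming liburcu and its legacy <urcu.h> API (link with -lurcu; the names scan_request/scan_done are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>

struct scan_request { int id; };

static struct scan_request *scan_req;

static void scan_done(struct scan_request *req)
{
        printf("scan %d done\n", req->id);
        free(req);      /* safe: no reader can still hold a reference */
}

int main(void)
{
        struct scan_request *req = malloc(sizeof(*req));

        req->id = 1;
        rcu_register_thread();
        rcu_assign_pointer(scan_req, req);

        /* completion path: unpublish, wait for readers, then finalize */
        rcu_assign_pointer(scan_req, NULL);
        synchronize_rcu();      /* all in-flight read-side sections exit */
        scan_done(req);         /* only now is freeing safe */

        rcu_unregister_thread();
        return 0;
}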
net/mac80211/sta_info.c:

@@ -494,7 +494,7 @@ __sta_info_alloc(struct ieee80211_sub_if_data *sdata,
 	sta->sdata = sdata;

 	if (sta_info_alloc_link(local, &sta->deflink, gfp))
-		return NULL;
+		goto free;

 	if (link_id >= 0) {
 		sta_info_add_link(sta, link_id, &sta->deflink,
net/mac80211/sta_info.h:

@@ -2316,9 +2316,9 @@ static inline u64 sta_get_tidstats_msdu(struct ieee80211_sta_rx_stats *rxstats,
 	u64 value;

 	do {
-		start = u64_stats_fetch_begin(&rxstats->syncp);
+		start = u64_stats_fetch_begin_irq(&rxstats->syncp);
 		value = rxstats->msdu[tid];
-	} while (u64_stats_fetch_retry(&rxstats->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));

 	return value;
 }
@@ -2384,9 +2384,9 @@ static inline u64 sta_get_stats_bytes(struct ieee80211_sta_rx_stats *rxstats)
 	u64 value;

 	do {
-		start = u64_stats_fetch_begin(&rxstats->syncp);
+		start = u64_stats_fetch_begin_irq(&rxstats->syncp);
 		value = rxstats->bytes;
-	} while (u64_stats_fetch_retry(&rxstats->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&rxstats->syncp, start));

 	return value;
 }
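These hunks (and the mpls one further down) switch readers to the _irq variants, which on 32-bit hosts additionally guard against writers running in interrupt context. The underlying contract is a seqcount-style retry loop: the reader keeps re-reading until it observes an even, unchanged sequence number, i.e. a snapshot not torn by a concurrent writer. A minimal C11-atomics sketch of that contract (not the kernel implementation):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static atomic_uint seq;
static uint64_t bytes;  /* the 64-bit counter being protected */

static void writer_add(uint64_t n)
{
        atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* odd: write in progress */
        bytes += n;
        atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* even: stable again */
}

static uint64_t reader_fetch(void)
{
        unsigned int start;
        uint64_t v;

        do {
                start = atomic_load_explicit(&seq, memory_order_acquire);
                v = bytes;      /* may be torn on 32-bit; the loop rejects it */
        } while (start & 1 ||
                 start != atomic_load_explicit(&seq, memory_order_acquire));
        return v;
}

int main(void)
{
        writer_add(1500);
        writer_add(42);
        printf("bytes=%llu\n", (unsigned long long)reader_fetch());
        return 0;
}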
net/mac80211/tx.c:

@@ -5885,6 +5885,7 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
 	rcu_read_lock();
 	err = ieee80211_lookup_ra_sta(sdata, skb, &sta);
 	if (err) {
+		dev_kfree_skb(skb);
 		rcu_read_unlock();
 		return err;
 	}
@@ -5899,7 +5900,7 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
 		 * for MLO STA, the SA should be the AP MLD address, but
 		 * the link ID has been selected already
 		 */
-		if (sta->sta.mlo)
+		if (sta && sta->sta.mlo)
 			memcpy(ehdr->h_source, sdata->vif.addr, ETH_ALEN);
 	}
 	rcu_read_unlock();
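Both tx.c changes are about early exits: the lookup-failure path previously returned without releasing the skb it had taken ownership of (the "possible leak"), and the MLO branch could dereference sta while it was NULL. The leak class in miniature, as a compilable sketch (illustrative names; free() stands in for dev_kfree_skb()):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct skb { char data[64]; };

static int lookup_dest(int want_fail) { return want_fail ? -ENOENT : 0; }

/* takes ownership of skb: must release it on *every* path out */
static int tx_control_port(struct skb *skb, int want_fail)
{
        int err = lookup_dest(want_fail);

        if (err) {
                free(skb);      /* the fix: error path also frees the buffer */
                return err;
        }
        free(skb);              /* success: buffer consumed (handed to driver) */
        return 0;
}

int main(void)
{
        printf("ok path: %d\n", tx_control_port(malloc(sizeof(struct skb)), 0));
        printf("err path: %d\n", tx_control_port(malloc(sizeof(struct skb)), 1));
        return 0;
}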
net/mac802154/rx.c:

@@ -44,7 +44,7 @@ ieee802154_subif_frame(struct ieee802154_sub_if_data *sdata,

 	switch (mac_cb(skb)->dest.mode) {
 	case IEEE802154_ADDR_NONE:
-		if (mac_cb(skb)->dest.mode != IEEE802154_ADDR_NONE)
+		if (hdr->source.mode != IEEE802154_ADDR_NONE)
 			/* FIXME: check if we are PAN coordinator */
 			skb->pkt_type = PACKET_OTHERHOST;
 		else
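This is the "fix reception for no-daddr packets" change: inside case IEEE802154_ADDR_NONE the old code re-tested dest.mode != IEEE802154_ADDR_NONE, a condition the case label has already made false, so the intended branch could never run; the fix tests the source address mode instead. The bug pattern in miniature (illustrative enum, not the 802.15.4 definitions):

#include <stdio.h>

enum addr_mode { ADDR_NONE, ADDR_SHORT, ADDR_LONG };

static const char *classify(enum addr_mode dest, enum addr_mode src)
{
        switch (dest) {
        case ADDR_NONE:
                if (src != ADDR_NONE)   /* was: dest != ADDR_NONE, never true here */
                        return "OTHERHOST";     /* ours only if we are PAN coordinator */
                return "HOST";
        default:
                return "addressed";
        }
}

int main(void)
{
        printf("%s\n", classify(ADDR_NONE, ADDR_LONG)); /* OTHERHOST */
        printf("%s\n", classify(ADDR_NONE, ADDR_NONE)); /* HOST */
        return 0;
}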
net/mpls/af_mpls.c:

@@ -1079,9 +1079,9 @@ static void mpls_get_stats(struct mpls_dev *mdev,

 		p = per_cpu_ptr(mdev->stats, i);
 		do {
-			start = u64_stats_fetch_begin(&p->syncp);
+			start = u64_stats_fetch_begin_irq(&p->syncp);
 			local = p->stats;
-		} while (u64_stats_fetch_retry(&p->syncp, start));
+		} while (u64_stats_fetch_retry_irq(&p->syncp, start));

 		stats->rx_packets += local.rx_packets;
 		stats->rx_bytes += local.rx_bytes;
net/openvswitch/datapath.c:

@@ -1802,7 +1802,7 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
 			ovs_dp_reset_user_features(skb, info);
 		}

-		goto err_unlock_and_destroy_meters;
+		goto err_destroy_portids;
 	}

 	err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid,
@@ -1817,6 +1817,8 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
 	ovs_notify(&dp_datapath_genl_family, reply, info);
 	return 0;

+err_destroy_portids:
+	kfree(rcu_dereference_raw(dp->upcall_portids));
 err_unlock_and_destroy_meters:
 	ovs_unlock();
 	ovs_meters_exit(dp);
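The ovs fix adds one more rung to the stacked-unwind ladder: a failure after the upcall portids were allocated now enters the ladder one label earlier, so those portids are freed too. The idiom relies on each label releasing exactly the resources acquired before the failing step; a compilable sketch with hypothetical resources:

#include <stdio.h>
#include <stdlib.h>

static int setup(int fail_late)
{
        int *meters, *portids = NULL;
        int err = -1;

        meters = malloc(sizeof(*meters));
        if (!meters)
                goto err_out;

        portids = malloc(sizeof(*portids));
        if (!portids)
                goto err_destroy_meters;

        if (fail_late)
                goto err_destroy_portids;       /* new rung: also frees portids */

        free(portids);
        free(meters);
        return 0;

err_destroy_portids:
        free(portids);
err_destroy_meters:
        free(meters);
err_out:
        return err;
}

int main(void)
{
        printf("ok=%d late-failure=%d\n", setup(0), setup(1));
        return 0;
}

Falling through from one label to the next keeps the teardown order the exact reverse of construction, which is why jumping to the right rung is all a new error path needs.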
net/sched/sch_generic.c:

@@ -1122,6 +1122,21 @@ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
 }
 EXPORT_SYMBOL(dev_graft_qdisc);

+static void shutdown_scheduler_queue(struct net_device *dev,
+				     struct netdev_queue *dev_queue,
+				     void *_qdisc_default)
+{
+	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
+	struct Qdisc *qdisc_default = _qdisc_default;
+
+	if (qdisc) {
+		rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
+		dev_queue->qdisc_sleeping = qdisc_default;
+
+		qdisc_put(qdisc);
+	}
+}
+
 static void attach_one_default_qdisc(struct net_device *dev,
 				     struct netdev_queue *dev_queue,
 				     void *_unused)
@@ -1169,6 +1184,7 @@ static void attach_default_qdiscs(struct net_device *dev)
 		if (qdisc == &noop_qdisc) {
 			netdev_warn(dev, "default qdisc (%s) fail, fallback to %s\n",
 				    default_qdisc_ops->id, noqueue_qdisc_ops.id);
+			netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
 			dev->priv_flags |= IFF_NO_QUEUE;
 			netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
 			qdisc = txq->qdisc_sleeping;
@@ -1447,21 +1463,6 @@ void dev_init_scheduler(struct net_device *dev)
 	timer_setup(&dev->watchdog_timer, dev_watchdog, 0);
 }

-static void shutdown_scheduler_queue(struct net_device *dev,
-				     struct netdev_queue *dev_queue,
-				     void *_qdisc_default)
-{
-	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
-	struct Qdisc *qdisc_default = _qdisc_default;
-
-	if (qdisc) {
-		rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
-		dev_queue->qdisc_sleeping = qdisc_default;
-
-		qdisc_put(qdisc);
-	}
-}
-
 void dev_shutdown(struct net_device *dev)
 {
 	netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
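shutdown_scheduler_queue() is only moved up the file so the fallback path in attach_default_qdiscs() can call it: when the preferred default qdisc fails to attach, the qdiscs grafted by the first pass must be put before the noqueue fallback is installed on every TX queue, otherwise each queue leaks one qdisc reference. A toy refcount model of "put the old one before attaching the new one" (hypothetical names):

#include <stdio.h>
#include <stdlib.h>

struct qdisc { int refcnt; };

static struct qdisc *qdisc_create(void)
{
        struct qdisc *q = malloc(sizeof(*q));

        q->refcnt = 1;
        return q;
}

static void qdisc_put(struct qdisc *q)
{
        if (q && --q->refcnt == 0)
                free(q);
}

int main(void)
{
        struct qdisc *queue[4];

        for (int i = 0; i < 4; i++)             /* first pass: attach defaults */
                queue[i] = qdisc_create();

        for (int i = 0; i < 4; i++) {           /* fallback pass */
                qdisc_put(queue[i]);            /* the fix: drop the old reference */
                queue[i] = qdisc_create();      /* ...then attach the fallback */
        }

        for (int i = 0; i < 4; i++)
                qdisc_put(queue[i]);
        return 0;
}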
net/sched/sch_tbf.c:

@@ -356,6 +356,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
 	struct nlattr *tb[TCA_TBF_MAX + 1];
 	struct tc_tbf_qopt *qopt;
 	struct Qdisc *child = NULL;
+	struct Qdisc *old = NULL;
 	struct psched_ratecfg rate;
 	struct psched_ratecfg peak;
 	u64 max_size;
@@ -447,7 +448,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
 	sch_tree_lock(sch);
 	if (child) {
 		qdisc_tree_flush_backlog(q->qdisc);
-		qdisc_put(q->qdisc);
+		old = q->qdisc;
 		q->qdisc = child;
 	}
 	q->limit = qopt->limit;
@@ -467,6 +468,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
 		memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg));

 	sch_tree_unlock(sch);
+	qdisc_put(old);
 	err = 0;

 	tbf_offload_change(sch);
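This is the "don't call qdisc_put() while holding tree lock" fix: releasing a qdisc can recurse into teardown work that must not run under sch_tree_lock(), so the old child is only unlinked while locked and put after unlocking. The unlink-then-release shape, sketched with a pthread mutex standing in for the tree lock:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static int *current_child;

static void change(int *new_child)
{
        int *old;

        pthread_mutex_lock(&tree_lock);
        old = current_child;            /* unlink only, while locked */
        current_child = new_child;
        pthread_mutex_unlock(&tree_lock);

        free(old);                      /* heavyweight release, unlocked */
}

int main(void)
{
        change(malloc(sizeof(int)));
        change(malloc(sizeof(int)));
        change(NULL);
        printf("done\n");
        return 0;
}

Deferring destruction until after the critical section is the standard cure whenever a destructor may sleep or re-enter the locked subsystem.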
net/smc/af_smc.c:

@@ -1855,7 +1855,6 @@ static void smc_listen_out_connected(struct smc_sock *new_smc)
 {
 	struct sock *newsmcsk = &new_smc->sk;

-	sk_refcnt_debug_inc(newsmcsk);
 	if (newsmcsk->sk_state == SMC_INIT)
 		newsmcsk->sk_state = SMC_ACTIVE;
net/wireless/debugfs.c:

@@ -65,9 +65,10 @@ static ssize_t ht40allow_map_read(struct file *file,
 {
 	struct wiphy *wiphy = file->private_data;
 	char *buf;
-	unsigned int offset = 0, buf_size = PAGE_SIZE, i, r;
+	unsigned int offset = 0, buf_size = PAGE_SIZE, i;
 	enum nl80211_band band;
 	struct ieee80211_supported_band *sband;
+	ssize_t r;

 	buf = kzalloc(buf_size, GFP_KERNEL);
 	if (!buf)
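The debugfs return-type bug: r carried both a byte count and negative error codes, but was declared unsigned, so any negative result wrapped to a huge positive value and error checks became dead code; declaring it ssize_t restores the sign. In miniature:

#include <stdio.h>

int main(void)
{
        unsigned int r_unsigned = -5;   /* wraps to 4294967291 */
        long r_signed = -5;             /* ssize_t-like: keeps its sign */

        printf("unsigned: %u (r < 0 is %d)\n", r_unsigned, r_unsigned < 0);
        printf("signed:   %ld (r < 0 is %d)\n", r_signed, r_signed < 0);
        return 0;
}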
net/xdp/xsk_buff_pool.c:

@@ -379,6 +379,16 @@ static void xp_check_dma_contiguity(struct xsk_dma_map *dma_map)

 static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_map)
 {
+	if (!pool->unaligned) {
+		u32 i;
+
+		for (i = 0; i < pool->heads_cnt; i++) {
+			struct xdp_buff_xsk *xskb = &pool->heads[i];
+
+			xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
+		}
+	}
+
 	pool->dma_pages = kvcalloc(dma_map->dma_pages_cnt, sizeof(*pool->dma_pages), GFP_KERNEL);
 	if (!pool->dma_pages)
 		return -ENOMEM;
@@ -428,12 +438,6 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,

 	if (pool->unaligned)
 		xp_check_dma_contiguity(dma_map);
-	else
-		for (i = 0; i < pool->heads_cnt; i++) {
-			struct xdp_buff_xsk *xskb = &pool->heads[i];
-
-			xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
-		}

 	err = xp_dma_info(pool, dma_map);
 	if (err) {
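This is the XDP_SHARED_UMEM corruption fix: per-buffer DMA addresses are now derived in xp_init_dma_info(), which every pool sharing the umem passes through, instead of only in the first mapping's setup path, so secondary pools never transmit with uninitialized addresses. A simplified page-table model of the per-buffer derivation (hypothetical names):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  (~((uint64_t)(1 << PAGE_SHIFT) - 1))

struct xskb { uint64_t orig_addr, dma; };

/* derive a buffer's DMA address from the shared per-page DMA table */
static void init_xskb_dma(struct xskb *b, const uint64_t *dma_pages)
{
        b->dma = dma_pages[b->orig_addr >> PAGE_SHIFT] +
                 (b->orig_addr & ~PAGE_MASK);
}

int main(void)
{
        uint64_t dma_pages[2] = { 0x100000, 0x200000 }; /* per-page DMA bases */
        struct xskb bufs[2] = { { .orig_addr = 0x0800 },
                                { .orig_addr = 0x1010 } };

        /* runs for every pool attaching to the umem, so a shared pool
         * never sees zero (uninitialized) DMA addresses */
        for (int i = 0; i < 2; i++)
                init_xskb_dma(&bufs[i], dma_pages);

        for (int i = 0; i < 2; i++)
                printf("buf%d dma=0x%llx\n", i,
                       (unsigned long long)bufs[i].dma);
        return 0;
}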
tools/testing/selftests/bpf/DENYLIST.s390x:

@@ -65,3 +65,4 @@ send_signal   # intermittently fails to receive signa
 select_reuseport      # intermittently fails on new s390x setup
 xdp_synproxy          # JIT does not support calling kernel function (kfunc)
 unpriv_bpf_disabled   # fentry
+lru_bug               # prog 'printk': failed to auto-attach: -524
tools/testing/selftests/bpf/verifier/precise.c:

@@ -192,3 +192,28 @@
 	.result = VERBOSE_ACCEPT,
 	.retval = -1,
 },
+{
+	"precise: mark_chain_precision for ARG_CONST_ALLOC_SIZE_OR_ZERO",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, ingress_ifindex)),
+	BPF_LD_MAP_FD(BPF_REG_6, 0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+	BPF_MOV64_IMM(BPF_REG_2, 1),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
+	BPF_MOV64_IMM(BPF_REG_2, 0x1000),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 42),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 1 },
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.flags = BPF_F_TEST_STATE_FREQ,
+	.errstr = "invalid access to memory, mem_size=1 off=42 size=8",
+	.result = REJECT,
+},
tools/testing/selftests/net/.gitignore (vendored), re-sorted alphabetically:

@@ -1,42 +1,42 @@
# SPDX-License-Identifier: GPL-2.0-only
cmsg_sender
fin_ack_lat
gro
hwtstamp_config
ioam6_parser
ip_defrag
ipsec
ipv6_flowlabel
ipv6_flowlabel_mgr
msg_zerocopy
nettest
psock_fanout
psock_snd
psock_tpacket
reuseaddr_conflict
reuseaddr_ports_exhausted
reuseport_addr_any
reuseport_bpf
reuseport_bpf_cpu
reuseport_bpf_numa
reuseport_dualstack
rxtimestamp
socket
so_netns_cookie
so_txtime
stress_reuseport_listen
tap
tcp_fastopen_backup_key
tcp_inq
tcp_mmap
test_unix_oob
timestamping
tls
toeplitz
tun
txring_overwrite
txtimestamp
udpgso
udpgso_bench_rx
udpgso_bench_tx
unix_connect