Merge tag 'net-5.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Networking fixes for 5.10-rc6, including fixes from the WiFi driver,
  and CAN subtrees.

  Current release - regressions:

   - gro_cells: reduce number of synchronize_net() calls

   - ch_ktls: release a lock before jumping to an error path

  Current release - always broken:

   - tcp: Allow full IP tos/IPv6 tclass to be reflected in L3 header

  Previous release - regressions:

   - net/tls: fix missing received data after fast remote close

   - vsock/virtio: discard packets only when socket is really closed

   - sock: set sk_err to ee_errno on dequeue from errq

   - cxgb4: fix the panic caused by non smac rewrite

  Previous release - always broken:

   - tcp: fix corner cases around setting ECN with BPF selection of
     congestion control

   - tcp: fix race condition when creating child sockets from syncookies
     on loopback interface

   - usbnet: ipheth: fix connectivity with iOS 14

   - tun: honor IOCB_NOWAIT flag

   - net/packet: fix packet receive on L3 devices without visible hard
     header

   - devlink: Make sure devlink instance and port are in same net
     namespace

   - net: openvswitch: fix TTL decrement action netlink message format

   - bonding: wait for sysfs kobject destruction before freeing struct
     slave

   - net: stmmac: fix upstream patch applied to the wrong context

   - bnxt_en: fix return value and unwind in probe error paths

  Misc:

   - devlink: add extra layer of categorization to the reload stats uAPI
     before it's released"

* tag 'net-5.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (68 commits)
  sock: set sk_err to ee_errno on dequeue from errq
  mptcp: fix NULL ptr dereference on bad MPJ
  net: openvswitch: fix TTL decrement action netlink message format
  can: af_can: can_rx_unregister(): remove WARN() statement from list operation sanity check
  can: m_can: m_can_dev_setup(): add support for bosch mcan version 3.3.0
  can: m_can: fix nominal bitiming tseg2 min for version >= 3.1
  can: m_can: m_can_open(): remove IRQF_TRIGGER_FALLING from request_threaded_irq()'s flags
  can: mcp251xfd: mcp251xfd_probe(): bail out if no IRQ was given
  can: gs_usb: fix endianess problem with candleLight firmware
  ch_ktls: lock is not freed
  net/tls: Protect from calling tls_dev_del for TLS RX twice
  devlink: Make sure devlink instance and port are in same net namespace
  devlink: Hold rtnl lock while reading netdev attributes
  ptp: clockmatrix: bug fix for idtcm_strverscmp
  enetc: Let the hardware auto-advance the taprio base-time of 0
  gro_cells: reduce number of synchronize_net() calls
  net: stmmac: fix incorrect merge of patch upstream
  ipv6: addrlabel: fix possible memory leak in ip6addrlbl_net_init
  Documentation: netdev-FAQ: suggest how to post co-dependent series
  ibmvnic: enhance resetting status check during module exit
  ...
Linus Torvalds 2020-11-27 14:38:02 -08:00
commit 79c0c1f038
66 changed files with 872 additions and 515 deletions


@@ -254,6 +254,32 @@ you will have done run-time testing specific to your change, but at a
minimum, your changes should survive an ``allyesconfig`` and an
``allmodconfig`` build without new warnings or failures.
Q: How do I post corresponding changes to user space components?
----------------------------------------------------------------
A: User space code exercising kernel features should be posted
alongside kernel patches. This gives reviewers a chance to see
how any new interface is used and how well it works.

When user space tools reside in the kernel repo itself all changes
should generally come as one series. If series becomes too large
or the user space project is not reviewed on netdev include a link
to a public repo where user space patches can be seen.

In case user space tooling lives in a separate repository but is
reviewed on netdev (e.g. patches to `iproute2` tools) kernel and
user space patches should form separate series (threads) when posted
to the mailing list, e.g.::

  [PATCH net-next 0/3] net: some feature cover letter
   └─ [PATCH net-next 1/3] net: some feature prep
    └─ [PATCH net-next 2/3] net: some feature do it
     └─ [PATCH net-next 3/3] selftest: net: some feature

  [PATCH iproute2-next] ip: add support for some feature

Posting as one thread is discouraged because it confuses patchwork
(as of patchwork 2.2.2).

Q: Any other tips to help ensure my net/net-next patch gets OK'd?
-----------------------------------------------------------------
A: Attention to detail. Re-read your own work as if you were the


@@ -3528,11 +3528,12 @@ BROADCOM BRCM80211 IEEE802.11n WIRELESS DRIVER
M: Arend van Spriel <arend.vanspriel@broadcom.com>
M: Franky Lin <franky.lin@broadcom.com>
M: Hante Meuleman <hante.meuleman@broadcom.com>
M: Chi-Hsien Lin <chi-hsien.lin@cypress.com>
M: Wright Feng <wright.feng@cypress.com>
M: Chi-hsien Lin <chi-hsien.lin@infineon.com>
M: Wright Feng <wright.feng@infineon.com>
M: Chung-hsien Hsu <chung-hsien.hsu@infineon.com>
L: linux-wireless@vger.kernel.org
L: brcm80211-dev-list.pdl@broadcom.com
L: brcm80211-dev-list@cypress.com
L: SHA-cyfmac-dev-list@infineon.com
S: Supported
F: drivers/net/wireless/broadcom/brcm80211/
@@ -13162,7 +13163,9 @@ M: Jesper Dangaard Brouer <hawk@kernel.org>
M: Ilias Apalodimas <ilias.apalodimas@linaro.org>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/page_pool.rst
F: include/net/page_pool.h
F: include/trace/events/page_pool.h
F: net/core/page_pool.c
PANASONIC LAPTOP ACPI EXTRAS DRIVER
@@ -14804,7 +14807,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.g
F: drivers/net/wireless/realtek/rtlwifi/
REALTEK WIRELESS DRIVER (rtw88)
M: Yan-Hsuan Chuang <yhchuang@realtek.com>
M: Yan-Hsuan Chuang <tony0620emma@gmail.com>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/realtek/rtw88/
@@ -15777,9 +15780,8 @@ F: drivers/slimbus/
F: include/linux/slimbus.h
SFC NETWORK DRIVER
M: Solarflare linux maintainers <linux-net-drivers@solarflare.com>
M: Edward Cree <ecree@solarflare.com>
M: Martin Habets <mhabets@solarflare.com>
M: Edward Cree <ecree.xilinx@gmail.com>
M: Martin Habets <habetsm.xilinx@gmail.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/sfc/


@@ -1460,29 +1460,9 @@ static void bond_upper_dev_unlink(struct bonding *bond, struct slave *slave)
slave->dev->flags &= ~IFF_SLAVE;
}
static struct slave *bond_alloc_slave(struct bonding *bond)
{
struct slave *slave = NULL;
slave = kzalloc(sizeof(*slave), GFP_KERNEL);
if (!slave)
return NULL;
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
SLAVE_AD_INFO(slave) = kzalloc(sizeof(struct ad_slave_info),
GFP_KERNEL);
if (!SLAVE_AD_INFO(slave)) {
kfree(slave);
return NULL;
}
}
INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
return slave;
}
static void bond_free_slave(struct slave *slave)
static void slave_kobj_release(struct kobject *kobj)
{
struct slave *slave = to_slave(kobj);
struct bonding *bond = bond_get_bond_by_slave(slave);
cancel_delayed_work_sync(&slave->notify_work);
@@ -1492,6 +1472,53 @@ static void bond_free_slave(struct slave *slave)
kfree(slave);
}
static struct kobj_type slave_ktype = {
.release = slave_kobj_release,
#ifdef CONFIG_SYSFS
.sysfs_ops = &slave_sysfs_ops,
#endif
};
static int bond_kobj_init(struct slave *slave)
{
int err;
err = kobject_init_and_add(&slave->kobj, &slave_ktype,
&(slave->dev->dev.kobj), "bonding_slave");
if (err)
kobject_put(&slave->kobj);
return err;
}
static struct slave *bond_alloc_slave(struct bonding *bond,
struct net_device *slave_dev)
{
struct slave *slave = NULL;
slave = kzalloc(sizeof(*slave), GFP_KERNEL);
if (!slave)
return NULL;
slave->bond = bond;
slave->dev = slave_dev;
if (bond_kobj_init(slave))
return NULL;
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
SLAVE_AD_INFO(slave) = kzalloc(sizeof(struct ad_slave_info),
GFP_KERNEL);
if (!SLAVE_AD_INFO(slave)) {
kobject_put(&slave->kobj);
return NULL;
}
}
INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
return slave;
}
static void bond_fill_ifbond(struct bonding *bond, struct ifbond *info)
{
info->bond_mode = BOND_MODE(bond);
@@ -1678,14 +1705,12 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
goto err_undo_flags;
}
new_slave = bond_alloc_slave(bond);
new_slave = bond_alloc_slave(bond, slave_dev);
if (!new_slave) {
res = -ENOMEM;
goto err_undo_flags;
}
new_slave->bond = bond;
new_slave->dev = slave_dev;
/* Set the new_slave's queue_id to be zero. Queue ID mapping
* is set via sysfs or module option if desired.
*/
@@ -2007,7 +2032,7 @@ err_restore_mtu:
dev_set_mtu(slave_dev, new_slave->original_mtu);
err_free:
bond_free_slave(new_slave);
kobject_put(&new_slave->kobj);
err_undo_flags:
/* Enslave of first slave has failed and we need to fix master's mac */
@@ -2187,7 +2212,7 @@ static int __bond_release_one(struct net_device *bond_dev,
if (!netif_is_bond_master(slave_dev))
slave_dev->priv_flags &= ~IFF_BONDING;
bond_free_slave(slave);
kobject_put(&slave->kobj);
return 0;
}


@@ -121,7 +121,6 @@ static const struct slave_attribute *slave_attrs[] = {
};
#define to_slave_attr(_at) container_of(_at, struct slave_attribute, attr)
#define to_slave(obj) container_of(obj, struct slave, kobj)
static ssize_t slave_show(struct kobject *kobj,
struct attribute *attr, char *buf)
@@ -132,28 +131,15 @@ static ssize_t slave_show(struct kobject *kobj,
return slave_attr->show(slave, buf);
}
static const struct sysfs_ops slave_sysfs_ops = {
const struct sysfs_ops slave_sysfs_ops = {
.show = slave_show,
};
static struct kobj_type slave_ktype = {
#ifdef CONFIG_SYSFS
.sysfs_ops = &slave_sysfs_ops,
#endif
};
int bond_sysfs_slave_add(struct slave *slave)
{
const struct slave_attribute **a;
int err;
err = kobject_init_and_add(&slave->kobj, &slave_ktype,
&(slave->dev->dev.kobj), "bonding_slave");
if (err) {
kobject_put(&slave->kobj);
return err;
}
for (a = slave_attrs; *a; ++a) {
err = sysfs_create_file(&slave->kobj, &((*a)->attr));
if (err) {
@@ -171,6 +157,4 @@ void bond_sysfs_slave_del(struct slave *slave)
for (a = slave_attrs; *a; ++a)
sysfs_remove_file(&slave->kobj, &((*a)->attr));
kobject_put(&slave->kobj);
}


@@ -1033,7 +1033,7 @@ static const struct can_bittiming_const m_can_bittiming_const_31X = {
.name = KBUILD_MODNAME,
.tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */
.tseg1_max = 256,
.tseg2_min = 1, /* Time segment 2 = phase_seg2 */
.tseg2_min = 2, /* Time segment 2 = phase_seg2 */
.tseg2_max = 128,
.sjw_max = 128,
.brp_min = 1,
@@ -1385,6 +1385,8 @@ static int m_can_dev_setup(struct m_can_classdev *m_can_dev)
&m_can_data_bittiming_const_31X;
break;
case 32:
case 33:
/* Support both MCAN version v3.2.x and v3.3.0 */
m_can_dev->can.bittiming_const = m_can_dev->bit_timing ?
m_can_dev->bit_timing : &m_can_bittiming_const_31X;
@@ -1653,7 +1655,7 @@ static int m_can_open(struct net_device *dev)
INIT_WORK(&cdev->tx_work, m_can_tx_work_queue);
err = request_threaded_irq(dev->irq, NULL, m_can_isr,
IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
IRQF_ONESHOT,
dev->name, dev);
} else {
err = request_irq(dev->irq, m_can_isr, IRQF_SHARED, dev->name,


@@ -2738,6 +2738,10 @@ static int mcp251xfd_probe(struct spi_device *spi)
u32 freq;
int err;
if (!spi->irq)
return dev_err_probe(&spi->dev, -ENXIO,
"No IRQ specified (maybe node \"interrupts-extended\" in DT missing)!\n");
rx_int = devm_gpiod_get_optional(&spi->dev, "microchip,rx-int",
GPIOD_IN);
if (PTR_ERR(rx_int) == -EPROBE_DEFER)


@@ -63,21 +63,27 @@ enum gs_can_identify_mode {
};
/* data types passed between host and device */
struct gs_host_config {
u32 byte_order;
} __packed;
/* All data exchanged between host and device is exchanged in host byte order,
* thanks to the struct gs_host_config byte_order member, which is sent first
* to indicate the desired byte order.
/* The firmware on the original USB2CAN by Geschwister Schneider
* Technologie Entwicklungs- und Vertriebs UG exchanges all data
* between the host and the device in host byte order. This is done
* with the struct gs_host_config::byte_order member, which is sent
* first to indicate the desired byte order.
*
* The widely used open source firmware candleLight doesn't support
* this feature and exchanges the data in little endian byte order.
*/
struct gs_host_config {
__le32 byte_order;
} __packed;
struct gs_device_config {
u8 reserved1;
u8 reserved2;
u8 reserved3;
u8 icount;
u32 sw_version;
u32 hw_version;
__le32 sw_version;
__le32 hw_version;
} __packed;
#define GS_CAN_MODE_NORMAL 0
@@ -87,26 +93,26 @@ struct gs_device_config {
#define GS_CAN_MODE_ONE_SHOT BIT(3)
struct gs_device_mode {
u32 mode;
u32 flags;
__le32 mode;
__le32 flags;
} __packed;
struct gs_device_state {
u32 state;
u32 rxerr;
u32 txerr;
__le32 state;
__le32 rxerr;
__le32 txerr;
} __packed;
struct gs_device_bittiming {
u32 prop_seg;
u32 phase_seg1;
u32 phase_seg2;
u32 sjw;
u32 brp;
__le32 prop_seg;
__le32 phase_seg1;
__le32 phase_seg2;
__le32 sjw;
__le32 brp;
} __packed;
struct gs_identify_mode {
u32 mode;
__le32 mode;
} __packed;
#define GS_CAN_FEATURE_LISTEN_ONLY BIT(0)
@@ -117,23 +123,23 @@ struct gs_identify_mode {
#define GS_CAN_FEATURE_IDENTIFY BIT(5)
struct gs_device_bt_const {
u32 feature;
u32 fclk_can;
u32 tseg1_min;
u32 tseg1_max;
u32 tseg2_min;
u32 tseg2_max;
u32 sjw_max;
u32 brp_min;
u32 brp_max;
u32 brp_inc;
__le32 feature;
__le32 fclk_can;
__le32 tseg1_min;
__le32 tseg1_max;
__le32 tseg2_min;
__le32 tseg2_max;
__le32 sjw_max;
__le32 brp_min;
__le32 brp_max;
__le32 brp_inc;
} __packed;
#define GS_CAN_FLAG_OVERFLOW 1
struct gs_host_frame {
u32 echo_id;
u32 can_id;
__le32 can_id;
u8 can_dlc;
u8 channel;
@@ -329,13 +335,13 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
if (!skb)
return;
cf->can_id = hf->can_id;
cf->can_id = le32_to_cpu(hf->can_id);
cf->can_dlc = get_can_dlc(hf->can_dlc);
memcpy(cf->data, hf->data, 8);
/* ERROR frames tell us information about the controller */
if (hf->can_id & CAN_ERR_FLAG)
if (le32_to_cpu(hf->can_id) & CAN_ERR_FLAG)
gs_update_state(dev, cf);
netdev->stats.rx_packets++;
@@ -418,11 +424,11 @@ static int gs_usb_set_bittiming(struct net_device *netdev)
if (!dbt)
return -ENOMEM;
dbt->prop_seg = bt->prop_seg;
dbt->phase_seg1 = bt->phase_seg1;
dbt->phase_seg2 = bt->phase_seg2;
dbt->sjw = bt->sjw;
dbt->brp = bt->brp;
dbt->prop_seg = cpu_to_le32(bt->prop_seg);
dbt->phase_seg1 = cpu_to_le32(bt->phase_seg1);
dbt->phase_seg2 = cpu_to_le32(bt->phase_seg2);
dbt->sjw = cpu_to_le32(bt->sjw);
dbt->brp = cpu_to_le32(bt->brp);
/* request bit timings */
rc = usb_control_msg(interface_to_usbdev(intf),
@@ -503,7 +509,7 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
cf = (struct can_frame *)skb->data;
hf->can_id = cf->can_id;
hf->can_id = cpu_to_le32(cf->can_id);
hf->can_dlc = cf->can_dlc;
memcpy(hf->data, cf->data, cf->can_dlc);
@@ -573,6 +579,7 @@ static int gs_can_open(struct net_device *netdev)
int rc, i;
struct gs_device_mode *dm;
u32 ctrlmode;
u32 flags = 0;
rc = open_candev(netdev);
if (rc)
@@ -640,24 +647,24 @@ static int gs_can_open(struct net_device *netdev)
/* flags */
ctrlmode = dev->can.ctrlmode;
dm->flags = 0;
if (ctrlmode & CAN_CTRLMODE_LOOPBACK)
dm->flags |= GS_CAN_MODE_LOOP_BACK;
flags |= GS_CAN_MODE_LOOP_BACK;
else if (ctrlmode & CAN_CTRLMODE_LISTENONLY)
dm->flags |= GS_CAN_MODE_LISTEN_ONLY;
flags |= GS_CAN_MODE_LISTEN_ONLY;
/* Controller is not allowed to retry TX
* this mode is unavailable on atmels uc3c hardware
*/
if (ctrlmode & CAN_CTRLMODE_ONE_SHOT)
dm->flags |= GS_CAN_MODE_ONE_SHOT;
flags |= GS_CAN_MODE_ONE_SHOT;
if (ctrlmode & CAN_CTRLMODE_3_SAMPLES)
dm->flags |= GS_CAN_MODE_TRIPLE_SAMPLE;
flags |= GS_CAN_MODE_TRIPLE_SAMPLE;
/* finally start device */
dm->mode = GS_CAN_MODE_START;
dm->mode = cpu_to_le32(GS_CAN_MODE_START);
dm->flags = cpu_to_le32(flags);
rc = usb_control_msg(interface_to_usbdev(dev->iface),
usb_sndctrlpipe(interface_to_usbdev(dev->iface), 0),
GS_USB_BREQ_MODE,
@@ -737,9 +744,9 @@ static int gs_usb_set_identify(struct net_device *netdev, bool do_identify)
return -ENOMEM;
if (do_identify)
imode->mode = GS_CAN_IDENTIFY_ON;
imode->mode = cpu_to_le32(GS_CAN_IDENTIFY_ON);
else
imode->mode = GS_CAN_IDENTIFY_OFF;
imode->mode = cpu_to_le32(GS_CAN_IDENTIFY_OFF);
rc = usb_control_msg(interface_to_usbdev(dev->iface),
usb_sndctrlpipe(interface_to_usbdev(dev->iface),
@@ -790,6 +797,7 @@ static struct gs_can *gs_make_candev(unsigned int channel,
struct net_device *netdev;
int rc;
struct gs_device_bt_const *bt_const;
u32 feature;
bt_const = kmalloc(sizeof(*bt_const), GFP_KERNEL);
if (!bt_const)
@@ -830,14 +838,14 @@ static struct gs_can *gs_make_candev(unsigned int channel,
/* dev setup */
strcpy(dev->bt_const.name, "gs_usb");
dev->bt_const.tseg1_min = bt_const->tseg1_min;
dev->bt_const.tseg1_max = bt_const->tseg1_max;
dev->bt_const.tseg2_min = bt_const->tseg2_min;
dev->bt_const.tseg2_max = bt_const->tseg2_max;
dev->bt_const.sjw_max = bt_const->sjw_max;
dev->bt_const.brp_min = bt_const->brp_min;
dev->bt_const.brp_max = bt_const->brp_max;
dev->bt_const.brp_inc = bt_const->brp_inc;
dev->bt_const.tseg1_min = le32_to_cpu(bt_const->tseg1_min);
dev->bt_const.tseg1_max = le32_to_cpu(bt_const->tseg1_max);
dev->bt_const.tseg2_min = le32_to_cpu(bt_const->tseg2_min);
dev->bt_const.tseg2_max = le32_to_cpu(bt_const->tseg2_max);
dev->bt_const.sjw_max = le32_to_cpu(bt_const->sjw_max);
dev->bt_const.brp_min = le32_to_cpu(bt_const->brp_min);
dev->bt_const.brp_max = le32_to_cpu(bt_const->brp_max);
dev->bt_const.brp_inc = le32_to_cpu(bt_const->brp_inc);
dev->udev = interface_to_usbdev(intf);
dev->iface = intf;
@@ -854,28 +862,29 @@ static struct gs_can *gs_make_candev(unsigned int channel,
/* can setup */
dev->can.state = CAN_STATE_STOPPED;
dev->can.clock.freq = bt_const->fclk_can;
dev->can.clock.freq = le32_to_cpu(bt_const->fclk_can);
dev->can.bittiming_const = &dev->bt_const;
dev->can.do_set_bittiming = gs_usb_set_bittiming;
dev->can.ctrlmode_supported = 0;
if (bt_const->feature & GS_CAN_FEATURE_LISTEN_ONLY)
feature = le32_to_cpu(bt_const->feature);
if (feature & GS_CAN_FEATURE_LISTEN_ONLY)
dev->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY;
if (bt_const->feature & GS_CAN_FEATURE_LOOP_BACK)
if (feature & GS_CAN_FEATURE_LOOP_BACK)
dev->can.ctrlmode_supported |= CAN_CTRLMODE_LOOPBACK;
if (bt_const->feature & GS_CAN_FEATURE_TRIPLE_SAMPLE)
if (feature & GS_CAN_FEATURE_TRIPLE_SAMPLE)
dev->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
if (bt_const->feature & GS_CAN_FEATURE_ONE_SHOT)
if (feature & GS_CAN_FEATURE_ONE_SHOT)
dev->can.ctrlmode_supported |= CAN_CTRLMODE_ONE_SHOT;
SET_NETDEV_DEV(netdev, &intf->dev);
if (dconf->sw_version > 1)
if (bt_const->feature & GS_CAN_FEATURE_IDENTIFY)
if (le32_to_cpu(dconf->sw_version) > 1)
if (feature & GS_CAN_FEATURE_IDENTIFY)
netdev->ethtool_ops = &gs_usb_ethtool_ops;
kfree(bt_const);
@@ -910,7 +919,7 @@ static int gs_usb_probe(struct usb_interface *intf,
if (!hconf)
return -ENOMEM;
hconf->byte_order = 0x0000beef;
hconf->byte_order = cpu_to_le32(0x0000beef);
/* send host config */
rc = usb_control_msg(interface_to_usbdev(intf),


@@ -516,6 +516,7 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
{
struct ena_com_rx_buf_info *ena_buf = &ena_rx_ctx->ena_bufs[0];
struct ena_eth_io_rx_cdesc_base *cdesc = NULL;
u16 q_depth = io_cq->q_depth;
u16 cdesc_idx = 0;
u16 nb_hw_desc;
u16 i = 0;
@@ -543,6 +544,8 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
do {
ena_buf[i].len = cdesc->length;
ena_buf[i].req_id = cdesc->req_id;
if (unlikely(ena_buf[i].req_id >= q_depth))
return -EIO;
if (++i >= nb_hw_desc)
break;


@@ -789,24 +789,6 @@ static void ena_free_all_io_tx_resources(struct ena_adapter *adapter)
adapter->num_io_queues);
}
static int validate_rx_req_id(struct ena_ring *rx_ring, u16 req_id)
{
if (likely(req_id < rx_ring->ring_size))
return 0;
netif_err(rx_ring->adapter, rx_err, rx_ring->netdev,
"Invalid rx req_id: %hu\n", req_id);
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->rx_stats.bad_req_id++;
u64_stats_update_end(&rx_ring->syncp);
/* Trigger device reset */
rx_ring->adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID;
set_bit(ENA_FLAG_TRIGGER_RESET, &rx_ring->adapter->flags);
return -EFAULT;
}
/* ena_setup_rx_resources - allocate I/O Rx resources (Descriptors)
* @adapter: network interface device structure
* @qid: queue index
@@ -926,10 +908,14 @@ static void ena_free_all_io_rx_resources(struct ena_adapter *adapter)
static int ena_alloc_rx_page(struct ena_ring *rx_ring,
struct ena_rx_buffer *rx_info, gfp_t gfp)
{
int headroom = rx_ring->rx_headroom;
struct ena_com_buf *ena_buf;
struct page *page;
dma_addr_t dma;
/* restore page offset value in case it has been changed by device */
rx_info->page_offset = headroom;
/* if previous allocated page is not used */
if (unlikely(rx_info->page))
return 0;
@@ -959,10 +945,9 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring,
"Allocate page %p, rx_info %p\n", page, rx_info);
rx_info->page = page;
rx_info->page_offset = 0;
ena_buf = &rx_info->ena_buf;
ena_buf->paddr = dma + rx_ring->rx_headroom;
ena_buf->len = ENA_PAGE_SIZE - rx_ring->rx_headroom;
ena_buf->paddr = dma + headroom;
ena_buf->len = ENA_PAGE_SIZE - headroom;
return 0;
}
@@ -1356,15 +1341,10 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
struct ena_rx_buffer *rx_info;
u16 len, req_id, buf = 0;
void *va;
int rc;
len = ena_bufs[buf].len;
req_id = ena_bufs[buf].req_id;
rc = validate_rx_req_id(rx_ring, req_id);
if (unlikely(rc < 0))
return NULL;
rx_info = &rx_ring->rx_buffer_info[req_id];
if (unlikely(!rx_info->page)) {
@@ -1379,7 +1359,8 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
/* save virt address of first buffer */
va = page_address(rx_info->page) + rx_info->page_offset;
prefetch(va + NET_IP_ALIGN);
prefetch(va);
if (len <= rx_ring->rx_copybreak) {
skb = ena_alloc_skb(rx_ring, false);
@@ -1420,8 +1401,6 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page,
rx_info->page_offset, len, ENA_PAGE_SIZE);
/* The offset is non zero only for the first buffer */
rx_info->page_offset = 0;
netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
"RX skb updated. len %d. data_len %d\n",
@@ -1440,10 +1419,6 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
len = ena_bufs[buf].len;
req_id = ena_bufs[buf].req_id;
rc = validate_rx_req_id(rx_ring, req_id);
if (unlikely(rc < 0))
return NULL;
rx_info = &rx_ring->rx_buffer_info[req_id];
} while (1);
@@ -1544,8 +1519,7 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
int ret;
rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
xdp->data = page_address(rx_info->page) +
rx_info->page_offset + rx_ring->rx_headroom;
xdp->data = page_address(rx_info->page) + rx_info->page_offset;
xdp_set_data_meta_invalid(xdp);
xdp->data_hard_start = page_address(rx_info->page);
xdp->data_end = xdp->data + rx_ring->ena_bufs[0].len;
@@ -1612,8 +1586,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
if (unlikely(ena_rx_ctx.descs == 0))
break;
/* First descriptor might have an offset set by the device */
rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
rx_info->page_offset = ena_rx_ctx.pkt_offset;
rx_info->page_offset += ena_rx_ctx.pkt_offset;
netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
"rx_poll: q %d got packet from ena. descs #: %d l3 proto %d l4 proto %d hash: %x\n",
@@ -1697,12 +1672,18 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
error:
adapter = netdev_priv(rx_ring->netdev);
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->rx_stats.bad_desc_num++;
u64_stats_update_end(&rx_ring->syncp);
if (rc == -ENOSPC) {
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->rx_stats.bad_desc_num++;
u64_stats_update_end(&rx_ring->syncp);
adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS;
} else {
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->rx_stats.bad_req_id++;
u64_stats_update_end(&rx_ring->syncp);
adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID;
}
/* Too many desc from the device. Trigger reset */
adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS;
set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
return 0;
@@ -3388,16 +3369,9 @@ static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
goto err_mmio_read_less;
}
rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_width));
rc = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(dma_width));
if (rc) {
dev_err(dev, "pci_set_dma_mask failed 0x%x\n", rc);
goto err_mmio_read_less;
}
rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(dma_width));
if (rc) {
dev_err(dev, "err_pci_set_consistent_dma_mask failed 0x%x\n",
rc);
dev_err(dev, "dma_set_mask_and_coherent failed %d\n", rc);
goto err_mmio_read_less;
}
@@ -4167,6 +4141,12 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return rc;
}
rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(ENA_MAX_PHYS_ADDR_SIZE_BITS));
if (rc) {
dev_err(&pdev->dev, "dma_set_mask_and_coherent failed %d\n", rc);
goto err_disable_device;
}
pci_set_master(pdev);
ena_dev = vzalloc(sizeof(*ena_dev));


@@ -413,85 +413,63 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
buff->rxdata.pg_off,
buff->len, DMA_FROM_DEVICE);
/* for single fragment packets use build_skb() */
if (buff->is_eop &&
buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
skb = build_skb(aq_buf_vaddr(&buff->rxdata),
skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
if (unlikely(!skb)) {
u64_stats_update_begin(&self->stats.rx.syncp);
self->stats.rx.skb_alloc_fails++;
u64_stats_update_end(&self->stats.rx.syncp);
err = -ENOMEM;
goto err_exit;
}
if (is_ptp_ring)
buff->len -=
aq_ptp_extract_ts(self->aq_nic, skb,
aq_buf_vaddr(&buff->rxdata),
buff->len);
hdr_len = buff->len;
if (hdr_len > AQ_CFG_RX_HDR_SIZE)
hdr_len = eth_get_headlen(skb->dev,
aq_buf_vaddr(&buff->rxdata),
AQ_CFG_RX_HDR_SIZE);
memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata),
ALIGN(hdr_len, sizeof(long)));
if (buff->len - hdr_len > 0) {
skb_add_rx_frag(skb, 0, buff->rxdata.page,
buff->rxdata.pg_off + hdr_len,
buff->len - hdr_len,
AQ_CFG_RX_FRAME_MAX);
if (unlikely(!skb)) {
u64_stats_update_begin(&self->stats.rx.syncp);
self->stats.rx.skb_alloc_fails++;
u64_stats_update_end(&self->stats.rx.syncp);
err = -ENOMEM;
goto err_exit;
}
if (is_ptp_ring)
buff->len -=
aq_ptp_extract_ts(self->aq_nic, skb,
aq_buf_vaddr(&buff->rxdata),
buff->len);
skb_put(skb, buff->len);
page_ref_inc(buff->rxdata.page);
} else {
skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE);
if (unlikely(!skb)) {
u64_stats_update_begin(&self->stats.rx.syncp);
self->stats.rx.skb_alloc_fails++;
u64_stats_update_end(&self->stats.rx.syncp);
err = -ENOMEM;
goto err_exit;
}
if (is_ptp_ring)
buff->len -=
aq_ptp_extract_ts(self->aq_nic, skb,
aq_buf_vaddr(&buff->rxdata),
buff->len);
}
hdr_len = buff->len;
if (hdr_len > AQ_CFG_RX_HDR_SIZE)
hdr_len = eth_get_headlen(skb->dev,
aq_buf_vaddr(&buff->rxdata),
AQ_CFG_RX_HDR_SIZE);
if (!buff->is_eop) {
buff_ = buff;
i = 1U;
do {
next_ = buff_->next;
buff_ = &self->buff_ring[next_];
memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata),
ALIGN(hdr_len, sizeof(long)));
if (buff->len - hdr_len > 0) {
skb_add_rx_frag(skb, 0, buff->rxdata.page,
buff->rxdata.pg_off + hdr_len,
buff->len - hdr_len,
dma_sync_single_range_for_cpu(aq_nic_get_dev(self->aq_nic),
buff_->rxdata.daddr,
buff_->rxdata.pg_off,
buff_->len,
DMA_FROM_DEVICE);
skb_add_rx_frag(skb, i++,
buff_->rxdata.page,
buff_->rxdata.pg_off,
buff_->len,
AQ_CFG_RX_FRAME_MAX);
page_ref_inc(buff->rxdata.page);
}
page_ref_inc(buff_->rxdata.page);
buff_->is_cleaned = 1;
if (!buff->is_eop) {
buff_ = buff;
i = 1U;
do {
next_ = buff_->next,
buff_ = &self->buff_ring[next_];
buff->is_ip_cso &= buff_->is_ip_cso;
buff->is_udp_cso &= buff_->is_udp_cso;
buff->is_tcp_cso &= buff_->is_tcp_cso;
buff->is_cso_err |= buff_->is_cso_err;
dma_sync_single_range_for_cpu(
aq_nic_get_dev(self->aq_nic),
buff_->rxdata.daddr,
buff_->rxdata.pg_off,
buff_->len,
DMA_FROM_DEVICE);
skb_add_rx_frag(skb, i++,
buff_->rxdata.page,
buff_->rxdata.pg_off,
buff_->len,
AQ_CFG_RX_FRAME_MAX);
page_ref_inc(buff_->rxdata.page);
buff_->is_cleaned = 1;
buff->is_ip_cso &= buff_->is_ip_cso;
buff->is_udp_cso &= buff_->is_udp_cso;
buff->is_tcp_cso &= buff_->is_tcp_cso;
buff->is_cso_err |= buff_->is_cso_err;
} while (!buff_->is_eop);
}
} while (!buff_->is_eop);
}
if (buff->is_vlan)


@@ -11590,7 +11590,8 @@ static int bnxt_init_board(struct pci_dev *pdev, struct net_device *dev)
if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) != 0 &&
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)) != 0) {
dev_err(&pdev->dev, "System does not support DMA, aborting\n");
goto init_err_disable;
rc = -EIO;
goto init_err_release;
}
pci_set_master(pdev);
@@ -12674,6 +12675,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
create_singlethread_workqueue("bnxt_pf_wq");
if (!bnxt_pf_wq) {
dev_err(&pdev->dev, "Unable to create workqueue.\n");
rc = -ENOMEM;
goto init_err_pci_clean;
}
}


@@ -68,7 +68,7 @@ config CHELSIO_T3
config CHELSIO_T4
tristate "Chelsio Communications T4/T5/T6 Ethernet support"
depends on PCI && (IPV6 || IPV6=n)
depends on PCI && (IPV6 || IPV6=n) && (TLS || TLS=n)
select FW_LOADER
select MDIO
select ZLIB_DEFLATE


@@ -880,7 +880,8 @@ int set_filter_wr(struct adapter *adapter, int fidx)
FW_FILTER_WR_OVLAN_VLD_V(f->fs.val.ovlan_vld) |
FW_FILTER_WR_IVLAN_VLDM_V(f->fs.mask.ivlan_vld) |
FW_FILTER_WR_OVLAN_VLDM_V(f->fs.mask.ovlan_vld));
fwr->smac_sel = f->smt->idx;
if (f->fs.newsmac)
fwr->smac_sel = f->smt->idx;
fwr->rx_chan_rx_rpl_iq =
htons(FW_FILTER_WR_RX_CHAN_V(0) |
FW_FILTER_WR_RX_RPL_IQ_V(adapter->sge.fw_evtq.abs_id));


@@ -544,7 +544,9 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
/* need to wait for hw response, can't free tx_info yet. */
if (tx_info->open_state == CH_KTLS_OPEN_PENDING)
tx_info->pending_close = true;
/* free the lock after the cleanup */
else
spin_unlock_bh(&tx_info->lock);
/* if in pending close, free the lock after the cleanup */
goto put_module;
}
spin_unlock_bh(&tx_info->lock);


@@ -4,6 +4,8 @@ config FSL_DPAA2_ETH
depends on FSL_MC_BUS && FSL_MC_DPIO
select PHYLINK
select PCS_LYNX
select FSL_XGMAC_MDIO
select NET_DEVLINK
help
This is the DPAA2 Ethernet driver supporting Freescale SoCs
with DPAA2 (DataPath Acceleration Architecture v2).


@@ -92,18 +92,8 @@ static int enetc_setup_taprio(struct net_device *ndev,
gcl_config->atc = 0xff;
gcl_config->acl_len = cpu_to_le16(gcl_len);
if (!admin_conf->base_time) {
gcl_data->btl =
cpu_to_le32(enetc_rd(&priv->si->hw, ENETC_SICTR0));
gcl_data->bth =
cpu_to_le32(enetc_rd(&priv->si->hw, ENETC_SICTR1));
} else {
gcl_data->btl =
cpu_to_le32(lower_32_bits(admin_conf->base_time));
gcl_data->bth =
cpu_to_le32(upper_32_bits(admin_conf->base_time));
}
gcl_data->btl = cpu_to_le32(lower_32_bits(admin_conf->base_time));
gcl_data->bth = cpu_to_le32(upper_32_bits(admin_conf->base_time));
gcl_data->ct = cpu_to_le32(admin_conf->cycle_time);
gcl_data->cte = cpu_to_le32(admin_conf->cycle_time_extension);


@@ -2074,8 +2074,11 @@ static int do_reset(struct ibmvnic_adapter *adapter,
for (i = 0; i < adapter->req_rx_queues; i++)
napi_schedule(&adapter->napi[i]);
if (adapter->reset_reason != VNIC_RESET_FAILOVER)
if (adapter->reset_reason == VNIC_RESET_FAILOVER ||
adapter->reset_reason == VNIC_RESET_MOBILITY) {
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev);
call_netdevice_notifiers(NETDEV_RESEND_IGMP, netdev);
}
rc = 0;
@@ -2145,6 +2148,9 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
if (rc)
return IBMVNIC_OPEN_FAILED;
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev);
call_netdevice_notifiers(NETDEV_RESEND_IGMP, netdev);
return 0;
}
@@ -2209,7 +2215,6 @@ static void __ibmvnic_reset(struct work_struct *work)
if (!saved_state) {
reset_state = adapter->state;
adapter->state = VNIC_RESETTING;
saved_state = true;
}
spin_unlock_irqrestore(&adapter->state_lock, flags);
@@ -2350,6 +2355,12 @@ static void ibmvnic_tx_timeout(struct net_device *dev, unsigned int txqueue)
{
struct ibmvnic_adapter *adapter = netdev_priv(dev);
if (test_bit(0, &adapter->resetting)) {
netdev_err(adapter->netdev,
"Adapter is resetting, skip timeout reset\n");
return;
}
ibmvnic_reset(adapter, VNIC_RESET_TIMEOUT);
}
@@ -2868,6 +2879,9 @@ static int reset_sub_crq_queues(struct ibmvnic_adapter *adapter)
{
int i, rc;
if (!adapter->tx_scrq || !adapter->rx_scrq)
return -EINVAL;
for (i = 0; i < adapter->req_tx_queues; i++) {
netdev_dbg(adapter->netdev, "Re-setting tx_scrq[%d]\n", i);
rc = reset_one_sub_crq_queue(adapter, adapter->tx_scrq[i]);
@@ -4958,6 +4972,9 @@ static int ibmvnic_reset_crq(struct ibmvnic_adapter *adapter)
} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
/* Clean out the queue */
if (!crq->msgs)
return -EINVAL;
memset(crq->msgs, 0, PAGE_SIZE);
crq->cur = 0;
crq->active = false;
@@ -5262,7 +5279,7 @@ static int ibmvnic_remove(struct vio_dev *dev)
unsigned long flags;
spin_lock_irqsave(&adapter->state_lock, flags);
if (adapter->state == VNIC_RESETTING) {
if (test_bit(0, &adapter->resetting)) {
spin_unlock_irqrestore(&adapter->state_lock, flags);
return -EBUSY;
}


@@ -942,8 +942,7 @@ enum vnic_state {VNIC_PROBING = 1,
VNIC_CLOSING,
VNIC_CLOSED,
VNIC_REMOVING,
VNIC_REMOVED,
VNIC_RESETTING};
VNIC_REMOVED};
enum ibmvnic_reset_reason {VNIC_RESET_FAILOVER = 1,
VNIC_RESET_MOBILITY,


@@ -140,6 +140,7 @@ enum i40e_state_t {
__I40E_CLIENT_RESET,
__I40E_VIRTCHNL_OP_PENDING,
__I40E_RECOVERY_MODE,
__I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */
/* This must be last as it determines the size of the BITMAP */
__I40E_STATE_SIZE__,
};


@@ -4010,8 +4010,16 @@ static irqreturn_t i40e_intr(int irq, void *data)
}
if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) {
ena_mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK;
set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);
/* disable any further VFLR event notifications */
if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state)) {
u32 reg = rd32(hw, I40E_PFINT_ICR0_ENA);
reg &= ~I40E_PFINT_ICR0_VFLR_MASK;
wr32(hw, I40E_PFINT_ICR0_ENA, reg);
} else {
ena_mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK;
set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);
}
}
if (icr0 & I40E_PFINT_ICR0_GRST_MASK) {
@@ -15311,6 +15319,11 @@ static void i40e_remove(struct pci_dev *pdev)
while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))
usleep_range(1000, 2000);
if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
set_bit(__I40E_VF_RESETS_DISABLED, pf->state);
i40e_free_vfs(pf);
pf->flags &= ~I40E_FLAG_SRIOV_ENABLED;
}
/* no more scheduling of any task */
set_bit(__I40E_SUSPENDED, pf->state);
set_bit(__I40E_DOWN, pf->state);
@@ -15337,11 +15350,6 @@ static void i40e_remove(struct pci_dev *pdev)
*/
i40e_notify_client_of_netdev_close(pf->vsi[pf->lan_vsi], false);
if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
i40e_free_vfs(pf);
pf->flags &= ~I40E_FLAG_SRIOV_ENABLED;
}
i40e_fdir_teardown(pf);
/* If there is a switch structure or any orphans, remove them.


@@ -1403,7 +1403,8 @@ static void i40e_cleanup_reset_vf(struct i40e_vf *vf)
* @vf: pointer to the VF structure
* @flr: VFLR was issued or not
*
* Returns true if the VF is reset, false otherwise.
* Returns true if the VF is in reset, resets successfully, or resets
* are disabled and false otherwise.
**/
bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
{
@@ -1413,11 +1414,14 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
u32 reg;
int i;
if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state))
return true;
/* If the VFs have been disabled, this means something else is
* resetting the VF, so we shouldn't continue.
*/
if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
return false;
return true;
i40e_trigger_vf_reset(vf, flr);
@@ -1581,6 +1585,15 @@ void i40e_free_vfs(struct i40e_pf *pf)
i40e_notify_client_of_vf_enable(pf, 0);
/* Disable IOV before freeing resources. This lets any VF drivers
* running in the host get themselves cleaned up before we yank
* the carpet out from underneath their feet.
*/
if (!pci_vfs_assigned(pf->pdev))
pci_disable_sriov(pf->pdev);
else
dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n");
/* Amortize wait time by stopping all VFs at the same time */
for (i = 0; i < pf->num_alloc_vfs; i++) {
if (test_bit(I40E_VF_STATE_INIT, &pf->vf[i].vf_states))
@@ -1596,15 +1609,6 @@ void i40e_free_vfs(struct i40e_pf *pf)
i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[i].lan_vsi_idx]);
}
/* Disable IOV before freeing resources. This lets any VF drivers
* running in the host get themselves cleaned up before we yank
* the carpet out from underneath their feet.
*/
if (!pci_vfs_assigned(pf->pdev))
pci_disable_sriov(pf->pdev);
else
dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n");
/* free up VF resources */
tmp = pf->num_alloc_vfs;
pf->num_alloc_vfs = 0;


@@ -1171,7 +1171,6 @@ const struct stmmac_ops dwmac4_ops = {
.pcs_get_adv_lp = dwmac4_get_adv_lp,
.debug = dwmac4_debug,
.set_filter = dwmac4_set_filter,
.flex_pps_config = dwmac5_flex_pps_config,
.set_mac_loopback = dwmac4_set_mac_loopback,
.update_vlan_hash = dwmac4_update_vlan_hash,
.sarc_configure = dwmac4_sarc_configure,
@@ -1213,6 +1212,7 @@ const struct stmmac_ops dwmac410_ops = {
.pcs_get_adv_lp = dwmac4_get_adv_lp,
.debug = dwmac4_debug,
.set_filter = dwmac4_set_filter,
.flex_pps_config = dwmac5_flex_pps_config,
.set_mac_loopback = dwmac4_set_mac_loopback,
.update_vlan_hash = dwmac4_update_vlan_hash,
.sarc_configure = dwmac4_sarc_configure,


@@ -1961,12 +1961,15 @@ static ssize_t tun_chr_write_iter(struct kiocb *iocb, struct iov_iter *from)
struct tun_file *tfile = file->private_data;
struct tun_struct *tun = tun_get(tfile);
ssize_t result;
int noblock = 0;
if (!tun)
return -EBADFD;
result = tun_get_user(tun, tfile, NULL, from,
file->f_flags & O_NONBLOCK, false);
if ((file->f_flags & O_NONBLOCK) || (iocb->ki_flags & IOCB_NOWAIT))
noblock = 1;
result = tun_get_user(tun, tfile, NULL, from, noblock, false);
tun_put(tun);
return result;
@@ -2185,10 +2188,15 @@ static ssize_t tun_chr_read_iter(struct kiocb *iocb, struct iov_iter *to)
struct tun_file *tfile = file->private_data;
struct tun_struct *tun = tun_get(tfile);
ssize_t len = iov_iter_count(to), ret;
int noblock = 0;
if (!tun)
return -EBADFD;
ret = tun_do_read(tun, tfile, to, file->f_flags & O_NONBLOCK, NULL);
if ((file->f_flags & O_NONBLOCK) || (iocb->ki_flags & IOCB_NOWAIT))
noblock = 1;
ret = tun_do_read(tun, tfile, to, noblock, NULL);
ret = min_t(ssize_t, ret, len);
if (ret > 0)
iocb->ki_pos = ret;


@@ -59,7 +59,7 @@
#define IPHETH_USBINTF_SUBCLASS 253
#define IPHETH_USBINTF_PROTO 1
#define IPHETH_BUF_SIZE 1516
#define IPHETH_BUF_SIZE 1514
#define IPHETH_IP_ALIGN 2 /* padding at front of URB */
#define IPHETH_TX_TIMEOUT (5 * HZ)


@@ -5,10 +5,9 @@
*
* GPL LICENSE SUMMARY
*
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
* Copyright(c) 2018 - 2019 Intel Corporation
* Copyright(c) 2012-2014, 2018 - 2020 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -28,10 +27,9 @@
*
* BSD LICENSE
*
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
* Copyright(c) 2018 - 2019 Intel Corporation
* Copyright(c) 2012-2014, 2018 - 2020 Intel Corporation
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -128,7 +126,9 @@ enum iwl_sta_flags {
STA_FLG_MAX_AGG_SIZE_256K = (5 << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_MAX_AGG_SIZE_512K = (6 << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_MAX_AGG_SIZE_1024K = (7 << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_MAX_AGG_SIZE_MSK = (7 << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_MAX_AGG_SIZE_2M = (8 << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_MAX_AGG_SIZE_4M = (9 << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_MAX_AGG_SIZE_MSK = (0xf << STA_FLG_MAX_AGG_SIZE_SHIFT),
STA_FLG_AGG_MPDU_DENS_SHIFT = 23,
STA_FLG_AGG_MPDU_DENS_2US = (4 << STA_FLG_AGG_MPDU_DENS_SHIFT),


@@ -8,7 +8,7 @@
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
* Copyright(c) 2018 - 2019 Intel Corporation
* Copyright(c) 2018 - 2020 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -31,7 +31,7 @@
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
* Copyright(c) 2018 - 2019 Intel Corporation
* Copyright(c) 2018 - 2020 Intel Corporation
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -421,12 +421,14 @@ struct iwl_hs20_roc_res {
* able to run the GO Negotiation. Will not be fragmented and not
* repetitive. Valid only on the P2P Device MAC. Only the duration will
* be taken into account.
* @SESSION_PROTECT_CONF_MAX_ID: not used
*/
enum iwl_mvm_session_prot_conf_id {
SESSION_PROTECT_CONF_ASSOC,
SESSION_PROTECT_CONF_GO_CLIENT_ASSOC,
SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV,
SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION,
SESSION_PROTECT_CONF_MAX_ID,
}; /* SESSION_PROTECTION_CONF_ID_E_VER_1 */
/**
@@ -459,7 +461,7 @@ struct iwl_mvm_session_prot_cmd {
* @mac_id: the mac id for which the session protection started / ended
* @status: 1 means success, 0 means failure
* @start: 1 means the session protection started, 0 means it ended
* @conf_id: the configuration id of the session that started / eneded
* @conf_id: see &enum iwl_mvm_session_prot_conf_id
*
* Note that any session protection will always get two notifications: start
* and end even the firmware could not schedule it.


@@ -147,6 +147,16 @@
#define CSR_MAC_SHADOW_REG_CTL2 (CSR_BASE + 0x0AC)
#define CSR_MAC_SHADOW_REG_CTL2_RX_WAKE 0xFFFF
/* LTR control (since IWL_DEVICE_FAMILY_22000) */
#define CSR_LTR_LONG_VAL_AD (CSR_BASE + 0x0D4)
#define CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ 0x80000000
#define CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE 0x1c000000
#define CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL 0x03ff0000
#define CSR_LTR_LONG_VAL_AD_SNOOP_REQ 0x00008000
#define CSR_LTR_LONG_VAL_AD_SNOOP_SCALE 0x00001c00
#define CSR_LTR_LONG_VAL_AD_SNOOP_VAL 0x000003ff
#define CSR_LTR_LONG_VAL_AD_SCALE_USEC 2
/* GIO Chicken Bits (PCI Express bus link power management) */
#define CSR_GIO_CHICKEN_BITS (CSR_BASE+0x100)


@@ -3080,7 +3080,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
/* this would be a mac80211 bug ... but don't crash */
if (WARN_ON_ONCE(!mvmvif->phy_ctxt))
return -EINVAL;
return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) ? 0 : -EINVAL;
/*
* If we are in a STA removal flow and in DQA mode:
@@ -3127,6 +3127,9 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
goto out_unlock;
}
if (vif->type == NL80211_IFTYPE_STATION)
vif->bss_conf.he_support = sta->he_cap.has_he;
if (sta->tdls &&
(vif->p2p ||
iwl_mvm_tdls_sta_count(mvm, NULL) ==


@@ -196,6 +196,7 @@ int iwl_mvm_sta_send_to_fw(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
mpdu_dens = sta->ht_cap.ampdu_density;
}
if (sta->vht_cap.vht_supported) {
agg_size = sta->vht_cap.cap &
IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
@@ -205,6 +206,23 @@ int iwl_mvm_sta_send_to_fw(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
agg_size = sta->ht_cap.ampdu_factor;
}
/* D6.0 10.12.2 A-MPDU length limit rules
* A STA indicates the maximum length of the A-MPDU preEOF padding
* that it can receive in an HE PPDU in the Maximum A-MPDU Length
* Exponent field in its HT Capabilities, VHT Capabilities,
* and HE 6 GHz Band Capabilities elements (if present) and the
* Maximum AMPDU Length Exponent Extension field in its HE
* Capabilities element
*/
if (sta->he_cap.has_he)
agg_size += u8_get_bits(sta->he_cap.he_cap_elem.mac_cap_info[3],
IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);
/* Limit to max A-MPDU supported by FW */
if (agg_size > (STA_FLG_MAX_AGG_SIZE_4M >> STA_FLG_MAX_AGG_SIZE_SHIFT))
agg_size = (STA_FLG_MAX_AGG_SIZE_4M >>
STA_FLG_MAX_AGG_SIZE_SHIFT);
add_sta_cmd.station_flags |=
cpu_to_le32(agg_size << STA_FLG_MAX_AGG_SIZE_SHIFT);
add_sta_cmd.station_flags |=


@@ -641,11 +641,32 @@ void iwl_mvm_protect_session(struct iwl_mvm *mvm,
}
}
static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif)
{
struct iwl_mvm_session_prot_cmd cmd = {
.id_and_color =
cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
mvmvif->color)),
.action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),
.conf_id = cpu_to_le32(mvmvif->time_event_data.id),
};
int ret;
ret = iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD,
MAC_CONF_GROUP, 0),
0, sizeof(cmd), &cmd);
if (ret)
IWL_ERR(mvm,
"Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret);
}
static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
struct iwl_mvm_time_event_data *te_data,
u32 *uid)
{
u32 id;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
/*
* It is possible that by the time we got to this point the time
@@ -663,14 +684,29 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
iwl_mvm_te_clear_data(mvm, te_data);
spin_unlock_bh(&mvm->time_event_lock);
/*
* It is possible that by the time we try to remove it, the time event
* has already ended and removed. In such a case there is no need to
* send a removal command.
/* When session protection is supported, the te_data->id field
* is reused to save session protection's configuration.
*/
if (id == TE_MAX) {
IWL_DEBUG_TE(mvm, "TE 0x%x has already ended\n", *uid);
if (fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) {
if (mvmvif && id < SESSION_PROTECT_CONF_MAX_ID) {
/* Session protection is still ongoing. Cancel it */
iwl_mvm_cancel_session_protection(mvm, mvmvif);
if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) {
set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
iwl_mvm_roc_finished(mvm);
}
}
return false;
} else {
/* It is possible that by the time we try to remove it, the
* time event has already ended and removed. In such a case
* there is no need to send a removal command.
*/
if (id == TE_MAX) {
IWL_DEBUG_TE(mvm, "TE 0x%x has already ended\n", *uid);
return false;
}
}
return true;
@@ -771,6 +807,7 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
struct iwl_rx_packet *pkt = rxb_addr(rxb);
struct iwl_mvm_session_prot_notif *notif = (void *)pkt->data;
struct ieee80211_vif *vif;
struct iwl_mvm_vif *mvmvif;
rcu_read_lock();
vif = iwl_mvm_rcu_dereference_vif_id(mvm, le32_to_cpu(notif->mac_id),
@@ -779,9 +816,10 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
if (!vif)
goto out_unlock;
mvmvif = iwl_mvm_vif_from_mac80211(vif);
/* The vif is not a P2P_DEVICE, maintain its time_event_data */
if (vif->type != NL80211_IFTYPE_P2P_DEVICE) {
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_time_event_data *te_data =
&mvmvif->time_event_data;
@@ -816,10 +854,14 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
if (!le32_to_cpu(notif->status) || !le32_to_cpu(notif->start)) {
/* End TE, notify mac80211 */
mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
ieee80211_remain_on_channel_expired(mvm->hw);
set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
iwl_mvm_roc_finished(mvm);
} else if (le32_to_cpu(notif->start)) {
if (WARN_ON(mvmvif->time_event_data.id !=
le32_to_cpu(notif->conf_id)))
goto out_unlock;
set_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status);
ieee80211_ready_on_channel(mvm->hw); /* Start TE */
}
@@ -845,20 +887,24 @@
lockdep_assert_held(&mvm->mutex);
/* The time_event_data.id field is reused to save session
* protection's configuration.
*/
switch (type) {
case IEEE80211_ROC_TYPE_NORMAL:
cmd.conf_id =
cpu_to_le32(SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV);
mvmvif->time_event_data.id =
SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV;
break;
case IEEE80211_ROC_TYPE_MGMT_TX:
cmd.conf_id =
cpu_to_le32(SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION);
mvmvif->time_event_data.id =
SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION;
break;
default:
WARN_ONCE(1, "Got an invalid ROC type\n");
return -EINVAL;
}
cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);
return iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD,
MAC_CONF_GROUP, 0),
0, sizeof(cmd), &cmd);
@@ -960,25 +1006,6 @@ void iwl_mvm_cleanup_roc_te(struct iwl_mvm *mvm)
__iwl_mvm_remove_time_event(mvm, te_data, &uid);
}
static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif)
{
struct iwl_mvm_session_prot_cmd cmd = {
.id_and_color =
cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
mvmvif->color)),
.action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),
};
int ret;
ret = iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD,
MAC_CONF_GROUP, 0),
0, sizeof(cmd), &cmd);
if (ret)
IWL_ERR(mvm,
"Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret);
}
void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif;
@@ -988,10 +1015,13 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) {
mvmvif = iwl_mvm_vif_from_mac80211(vif);
iwl_mvm_cancel_session_protection(mvm, mvmvif);
if (vif->type == NL80211_IFTYPE_P2P_DEVICE)
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
iwl_mvm_cancel_session_protection(mvm, mvmvif);
set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
} else {
iwl_mvm_remove_aux_roc_te(mvm, mvmvif,
&mvmvif->time_event_data);
}
iwl_mvm_roc_finished(mvm);
@@ -1126,10 +1156,15 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
mvmvif->color)),
.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
.conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC),
.duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),
};
/* The time_event_data.id field is reused to save session
* protection's configuration.
*/
mvmvif->time_event_data.id = SESSION_PROTECT_CONF_ASSOC;
cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);
lockdep_assert_held(&mvm->mutex);
spin_lock_bh(&mvm->time_event_lock);


@@ -252,6 +252,26 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,
CSR_AUTO_FUNC_BOOT_ENA);
if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) {
/*
* The firmware initializes this again later (to a smaller
* value), but for the boot process initialize the LTR to
* ~250 usec.
*/
u32 val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
u32_encode_bits(250,
CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
iwl_write32(trans, CSR_LTR_LONG_VAL_AD, val);
}
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
else


@@ -2156,18 +2156,36 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
void *buf, int dwords)
{
unsigned long flags;
int offs, ret = 0;
int offs = 0;
u32 *vals = buf;
if (iwl_trans_grab_nic_access(trans, &flags)) {
iwl_write32(trans, HBUS_TARG_MEM_RADDR, addr);
for (offs = 0; offs < dwords; offs++)
vals[offs] = iwl_read32(trans, HBUS_TARG_MEM_RDAT);
iwl_trans_release_nic_access(trans, &flags);
} else {
ret = -EBUSY;
while (offs < dwords) {
/* limit the time we spin here under lock to 1/2s */
ktime_t timeout = ktime_add_us(ktime_get(), 500 * USEC_PER_MSEC);
if (iwl_trans_grab_nic_access(trans, &flags)) {
iwl_write32(trans, HBUS_TARG_MEM_RADDR,
addr + 4 * offs);
while (offs < dwords) {
vals[offs] = iwl_read32(trans,
HBUS_TARG_MEM_RDAT);
offs++;
/* calling ktime_get is expensive so
* do it once in 128 reads
*/
if (offs % 128 == 0 && ktime_after(ktime_get(),
timeout))
break;
}
iwl_trans_release_nic_access(trans, &flags);
} else {
return -EBUSY;
}
}
return ret;
return 0;
}
static int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr,


@@ -1482,7 +1482,7 @@ static bool rtw_fw_dump_check_size(struct rtw_dev *rtwdev,
int rtw_fw_dump_fifo(struct rtw_dev *rtwdev, u8 fifo_sel, u32 addr, u32 size,
u32 *buffer)
{
if (!rtwdev->chip->fw_fifo_addr) {
if (!rtwdev->chip->fw_fifo_addr[0]) {
rtw_dbg(rtwdev, RTW_DBG_FW, "chip not support dump fw fifo\n");
return -ENOTSUPP;
}


@@ -26,8 +26,8 @@ struct s3fwrn5_i2c_phy {
struct i2c_client *i2c_dev;
struct nci_dev *ndev;
unsigned int gpio_en;
unsigned int gpio_fw_wake;
int gpio_en;
int gpio_fw_wake;
struct mutex mutex;


@@ -103,43 +103,26 @@ static int timespec_to_char_array(struct timespec64 const *ts,
return 0;
}
static int idtcm_strverscmp(const char *ver1, const char *ver2)
static int idtcm_strverscmp(const char *version1, const char *version2)
{
u8 num1;
u8 num2;
int result = 0;
u8 ver1[3], ver2[3];
int i;
/* loop through each level of the version string */
while (result == 0) {
/* extract leading version numbers */
if (kstrtou8(ver1, 10, &num1) < 0)
if (sscanf(version1, "%hhu.%hhu.%hhu",
&ver1[0], &ver1[1], &ver1[2]) != 3)
return -1;
if (sscanf(version2, "%hhu.%hhu.%hhu",
&ver2[0], &ver2[1], &ver2[2]) != 3)
return -1;
for (i = 0; i < 3; i++) {
if (ver1[i] > ver2[i])
return 1;
if (ver1[i] < ver2[i])
return -1;
if (kstrtou8(ver2, 10, &num2) < 0)
return -1;
/* if numbers differ, then set the result */
if (num1 < num2)
result = -1;
else if (num1 > num2)
result = 1;
else {
/* if numbers are the same, go to next level */
ver1 = strchr(ver1, '.');
ver2 = strchr(ver2, '.');
if (!ver1 && !ver2)
break;
else if (!ver1)
result = -1;
else if (!ver2)
result = 1;
else {
ver1++;
ver2++;
}
}
}
return result;
return 0;
}
static int idtcm_xfer_read(struct idtcm *idtcm,


@@ -417,10 +417,13 @@ enum qeth_qdio_out_buffer_state {
QETH_QDIO_BUF_EMPTY,
/* Filled by driver; owned by hardware in order to be sent. */
QETH_QDIO_BUF_PRIMED,
/* Identified to be pending in TPQ. */
/* Discovered by the TX completion code: */
QETH_QDIO_BUF_PENDING,
/* Found in completion queue. */
QETH_QDIO_BUF_IN_CQ,
/* Finished by the TX completion code: */
QETH_QDIO_BUF_NEED_QAOB,
/* Received QAOB notification on CQ: */
QETH_QDIO_BUF_QAOB_OK,
QETH_QDIO_BUF_QAOB_ERROR,
/* Handled via transfer pending / completion queue. */
QETH_QDIO_BUF_HANDLED_DELAYED,
};


@@ -33,6 +33,7 @@
 #include <net/iucv/af_iucv.h>
 #include <net/dsfield.h>
+#include <net/sock.h>
 #include <asm/ebcdic.h>
 #include <asm/chpid.h>
@@ -499,17 +500,12 @@ static void qeth_cleanup_handled_pending(struct qeth_qdio_out_q *q, int bidx,
 		}
 	}
-
-	if (forced_cleanup && (atomic_read(&(q->bufs[bidx]->state)) ==
-					QETH_QDIO_BUF_HANDLED_DELAYED)) {
-		/* for recovery situations */
-		qeth_init_qdio_out_buf(q, bidx);
-		QETH_CARD_TEXT(q->card, 2, "clprecov");
-	}
 }
 
 static void qeth_qdio_handle_aob(struct qeth_card *card,
 				 unsigned long phys_aob_addr)
 {
+	enum qeth_qdio_out_buffer_state new_state = QETH_QDIO_BUF_QAOB_OK;
 	struct qaob *aob;
 	struct qeth_qdio_out_buffer *buffer;
 	enum iucv_tx_notify notification;
@@ -521,22 +517,6 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 	buffer = (struct qeth_qdio_out_buffer *) aob->user1;
 	QETH_CARD_TEXT_(card, 5, "%lx", aob->user1);
 
-	if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED,
-			   QETH_QDIO_BUF_IN_CQ) == QETH_QDIO_BUF_PRIMED) {
-		notification = TX_NOTIFY_OK;
-	} else {
-		WARN_ON_ONCE(atomic_read(&buffer->state) !=
-							QETH_QDIO_BUF_PENDING);
-		atomic_set(&buffer->state, QETH_QDIO_BUF_IN_CQ);
-		notification = TX_NOTIFY_DELAYED_OK;
-	}
-
-	if (aob->aorc != 0) {
-		QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc);
-		notification = qeth_compute_cq_notification(aob->aorc, 1);
-	}
-	qeth_notify_skbs(buffer->q, buffer, notification);
-
 	/* Free dangling allocations. The attached skbs are handled by
 	 * qeth_cleanup_handled_pending().
 	 */
@@ -548,7 +528,33 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 		if (data && buffer->is_header[i])
 			kmem_cache_free(qeth_core_header_cache, data);
 	}
-	atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
+
+	if (aob->aorc) {
+		QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc);
+		new_state = QETH_QDIO_BUF_QAOB_ERROR;
+	}
+
+	switch (atomic_xchg(&buffer->state, new_state)) {
+	case QETH_QDIO_BUF_PRIMED:
+		/* Faster than TX completion code. */
+		notification = qeth_compute_cq_notification(aob->aorc, 0);
+		qeth_notify_skbs(buffer->q, buffer, notification);
+		atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
+		break;
+	case QETH_QDIO_BUF_PENDING:
+		/* TX completion code is active and will handle the async
+		 * completion for us.
+		 */
+		break;
+	case QETH_QDIO_BUF_NEED_QAOB:
+		/* TX completion code is already finished. */
+		notification = qeth_compute_cq_notification(aob->aorc, 1);
+		qeth_notify_skbs(buffer->q, buffer, notification);
+		atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+	}
 
 	qdio_release_aob(aob);
 }
@@ -1405,7 +1411,7 @@ static void qeth_notify_skbs(struct qeth_qdio_out_q *q,
 	skb_queue_walk(&buf->skb_list, skb) {
 		QETH_CARD_TEXT_(q->card, 5, "skbn%d", notification);
 		QETH_CARD_TEXT_(q->card, 5, "%lx", (long) skb);
-		if (skb->protocol == htons(ETH_P_AF_IUCV) && skb->sk)
+		if (skb->sk && skb->sk->sk_family == PF_IUCV)
 			iucv_sk(skb->sk)->sk_txnotify(skb, notification);
 	}
 }
@@ -1416,9 +1422,6 @@ static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
 	struct qeth_qdio_out_q *queue = buf->q;
 	struct sk_buff *skb;
 
-	/* release may never happen from within CQ tasklet scope */
-	WARN_ON_ONCE(atomic_read(&buf->state) == QETH_QDIO_BUF_IN_CQ);
-
 	if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING)
 		qeth_notify_skbs(queue, buf, TX_NOTIFY_GENERALERROR);
@@ -5869,9 +5872,32 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
 		if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED,
 				   QETH_QDIO_BUF_PENDING) ==
-		    QETH_QDIO_BUF_PRIMED)
+		    QETH_QDIO_BUF_PRIMED) {
 			qeth_notify_skbs(queue, buffer, TX_NOTIFY_PENDING);
+
+			/* Handle race with qeth_qdio_handle_aob(): */
+			switch (atomic_xchg(&buffer->state,
+					    QETH_QDIO_BUF_NEED_QAOB)) {
+			case QETH_QDIO_BUF_PENDING:
+				/* No concurrent QAOB notification. */
+				break;
+			case QETH_QDIO_BUF_QAOB_OK:
+				qeth_notify_skbs(queue, buffer,
+						 TX_NOTIFY_DELAYED_OK);
+				atomic_set(&buffer->state,
+					   QETH_QDIO_BUF_HANDLED_DELAYED);
+				break;
+			case QETH_QDIO_BUF_QAOB_ERROR:
+				qeth_notify_skbs(queue, buffer,
+						 TX_NOTIFY_DELAYED_GENERALERROR);
+				atomic_set(&buffer->state,
+					   QETH_QDIO_BUF_HANDLED_DELAYED);
+				break;
+			default:
+				WARN_ON_ONCE(1);
+			}
+		}
 
 		QETH_CARD_TEXT_(card, 5, "pel%u", bidx);
 
 		/* prepare the queue slot for re-use: */
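
The qeth race fix above boils down to a tiny atomic state machine: each of the two completion paths publishes its arrival with an exchange, and whichever side arrives second sees the other's state and performs the final notification. A minimal C11 sketch of that idea — states and names are invented for illustration, not the driver's:

	#include <stdatomic.h>
	#include <stdio.h>

	/* 0 = nobody arrived yet, 1 = TX completion arrived, 2 = async result arrived */
	static _Atomic int state;

	static void finish(const char *who)
	{
		printf("%s delivers the notification\n", who);
	}

	/* TX completion path */
	static void tx_complete(void)
	{
		if (atomic_exchange(&state, 1) == 2)
			finish("tx");	/* async result already there: we ran second */
		/* else: the async side will see our 1 and finish */
	}

	/* async (QAOB-like) completion path */
	static void async_complete(void)
	{
		if (atomic_exchange(&state, 2) == 1)
			finish("async");	/* TX already done: we ran second */
	}

	int main(void)
	{
		tx_complete();
		async_complete();	/* whichever runs second does the work */
		return 0;
	}

Because the exchange is atomic, exactly one side observes the other's marker, so the notification is delivered exactly once regardless of interleaving.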


@@ -985,32 +985,19 @@ static void qeth_l2_setup_bridgeport_attrs(struct qeth_card *card)
  * change notification' and thus can support the learning_sync bridgeport
  * attribute
  * @card: qeth_card structure pointer
- *
- * This is a destructive test and must be called before dev2br or
- * bridgeport address notification is enabled!
  */
 static void qeth_l2_detect_dev2br_support(struct qeth_card *card)
 {
 	struct qeth_priv *priv = netdev_priv(card->dev);
 	bool dev2br_supported;
-	int rc;
 
 	QETH_CARD_TEXT(card, 2, "d2brsup");
 	if (!IS_IQD(card))
 		return;
 
 	/* dev2br requires valid cssid,iid,chid */
-	if (!card->info.ids_valid) {
-		dev2br_supported = false;
-	} else if (css_general_characteristics.enarf) {
-		dev2br_supported = true;
-	} else {
-		/* Old machines don't have the feature bit:
-		 * Probe by testing whether a disable succeeds
-		 */
-		rc = qeth_l2_pnso(card, PNSO_OC_NET_ADDR_INFO, 0, NULL, NULL);
-		dev2br_supported = !rc;
-	}
+	dev2br_supported = card->info.ids_valid &&
+			   css_general_characteristics.enarf;
 	QETH_CARD_TEXT_(card, 2, "D2Bsup%02x", dev2br_supported);
 
 	if (dev2br_supported)
@@ -2233,7 +2220,6 @@ static int qeth_l2_set_online(struct qeth_card *card, bool carrier_ok)
 	struct net_device *dev = card->dev;
 	int rc = 0;
 
-	/* query before bridgeport_notification may be enabled */
 	qeth_l2_detect_dev2br_support(card);
 	mutex_lock(&card->sbp_lock);


@@ -3137,6 +3137,11 @@ static inline bool dev_validate_header(const struct net_device *dev,
 	return false;
 }
 
+static inline bool dev_has_header(const struct net_device *dev)
+{
+	return dev->header_ops && dev->header_ops->create;
+}
+
 typedef int gifconf_func_t(struct net_device * dev, char __user * bufptr,
 			   int len, int size);
 int register_gifconf(unsigned int family, gifconf_func_t *gifconf);


@@ -185,6 +185,11 @@ struct slave {
 	struct rtnl_link_stats64 slave_stats;
 };
 
+static inline struct slave *to_slave(struct kobject *kobj)
+{
+	return container_of(kobj, struct slave, kobj);
+}
+
 struct bond_up_slave {
 	unsigned int	count;
 	struct rcu_head rcu;
@@ -750,6 +755,9 @@ extern struct bond_parm_tbl ad_select_tbl[];
 /* exported from bond_netlink.c */
 extern struct rtnl_link_ops bond_link_ops;
 
+/* exported from bond_sysfs_slave.c */
+extern const struct sysfs_ops slave_sysfs_ops;
+
 static inline netdev_tx_t bond_tx_drop(struct net_device *dev, struct sk_buff *skb)
 {
 	atomic_long_inc(&dev->tx_dropped);


@@ -247,8 +247,9 @@ void inet_hashinfo2_init(struct inet_hashinfo *h, const char *name,
 			 unsigned long high_limit);
 int inet_hashinfo2_init_mod(struct inet_hashinfo *h);
 
-bool inet_ehash_insert(struct sock *sk, struct sock *osk);
-bool inet_ehash_nolisten(struct sock *sk, struct sock *osk);
+bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk);
+bool inet_ehash_nolisten(struct sock *sk, struct sock *osk,
+			 bool *found_dup_sk);
 int __inet_hash(struct sock *sk, struct sock *osk);
 int inet_hash(struct sock *sk);
 void inet_unhash(struct sock *sk);


@@ -199,6 +199,12 @@ enum tls_context_flags {
 	 * to be atomic.
 	 */
 	TLS_TX_SYNC_SCHED = 1,
+	/* tls_dev_del was called for the RX side, device state was released,
+	 * but tls_ctx->netdev might still be kept, because TX-side driver
+	 * resources might not be released yet. Used to prevent the second
+	 * tls_dev_del call in tls_device_down if it happens simultaneously.
+	 */
+	TLS_RX_DEV_CLOSED = 2,
 };
 
 struct cipher_context {


@@ -526,6 +526,8 @@ enum devlink_attr {
 	DEVLINK_ATTR_RELOAD_STATS_LIMIT,	/* u8 */
 	DEVLINK_ATTR_RELOAD_STATS_VALUE,	/* u32 */
 	DEVLINK_ATTR_REMOTE_RELOAD_STATS,	/* nested */
+	DEVLINK_ATTR_RELOAD_ACTION_INFO,	/* nested */
+	DEVLINK_ATTR_RELOAD_ACTION_STATS,	/* nested */
 
 	/* add new attributes above here, update the policy in devlink.c */


@@ -1058,4 +1058,6 @@ enum ovs_dec_ttl_attr {
 	__OVS_DEC_TTL_ATTR_MAX
 };
 
+#define OVS_DEC_TTL_ATTR_MAX (__OVS_DEC_TTL_ATTR_MAX - 1)
+
 #endif /* _LINUX_OPENVSWITCH_H */


@@ -180,6 +180,7 @@ static const struct file_operations batadv_log_fops = {
 	.read           = batadv_log_read,
 	.poll           = batadv_log_poll,
 	.llseek         = no_llseek,
+	.owner          = THIS_MODULE,
 };
 
 /**


@@ -541,10 +541,13 @@ void can_rx_unregister(struct net *net, struct net_device *dev, canid_t can_id,
 	/* Check for bugs in CAN protocol implementations using af_can.c:
 	 * 'rcv' will be NULL if no matching list item was found for removal.
+	 * As this case may potentially happen when closing a socket while
+	 * the notifier for removing the CAN netdev is running we just print
+	 * a warning here.
 	 */
 	if (!rcv) {
-		WARN(1, "BUG: receive list entry not found for dev %s, id %03X, mask %03X\n",
-		     DNAME(dev), can_id, mask);
+		pr_warn("can: receive list entry not found for dev %s, id %03X, mask %03X\n",
+			DNAME(dev), can_id, mask);
 		goto out;
 	}


@@ -517,7 +517,7 @@ devlink_reload_limit_is_supported(struct devlink *devlink, enum devlink_reload_l
 	return test_bit(limit, &devlink->ops->reload_limits);
 }
 
-static int devlink_reload_stat_put(struct sk_buff *msg, enum devlink_reload_action action,
+static int devlink_reload_stat_put(struct sk_buff *msg,
 				   enum devlink_reload_limit limit, u32 value)
 {
 	struct nlattr *reload_stats_entry;
@@ -526,8 +526,7 @@ static int devlink_reload_stat_put(struct sk_buff *msg, enum devlink_reload_acti
 	if (!reload_stats_entry)
 		return -EMSGSIZE;
 
-	if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_ACTION, action) ||
-	    nla_put_u8(msg, DEVLINK_ATTR_RELOAD_STATS_LIMIT, limit) ||
+	if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_STATS_LIMIT, limit) ||
 	    nla_put_u32(msg, DEVLINK_ATTR_RELOAD_STATS_VALUE, value))
 		goto nla_put_failure;
 
 	nla_nest_end(msg, reload_stats_entry);
@@ -540,7 +539,7 @@ nla_put_failure:
 
 static int devlink_reload_stats_put(struct sk_buff *msg, struct devlink *devlink, bool is_remote)
 {
-	struct nlattr *reload_stats_attr;
+	struct nlattr *reload_stats_attr, *act_info, *act_stats;
 	int i, j, stat_idx;
 	u32 value;
@@ -552,17 +551,29 @@ static int devlink_reload_stats_put(struct sk_buff *msg, struct devlink *devlink
 	if (!reload_stats_attr)
 		return -EMSGSIZE;
 
-	for (j = 0; j <= DEVLINK_RELOAD_LIMIT_MAX; j++) {
-		/* Remote stats are shown even if not locally supported. Stats
-		 * of actions with unspecified limit are shown though drivers
-		 * don't need to register unspecified limit.
-		 */
-		if (!is_remote && j != DEVLINK_RELOAD_LIMIT_UNSPEC &&
-		    !devlink_reload_limit_is_supported(devlink, j))
+	for (i = 0; i <= DEVLINK_RELOAD_ACTION_MAX; i++) {
+		if ((!is_remote &&
+		     !devlink_reload_action_is_supported(devlink, i)) ||
+		    i == DEVLINK_RELOAD_ACTION_UNSPEC)
 			continue;
-		for (i = 0; i <= DEVLINK_RELOAD_ACTION_MAX; i++) {
-			if ((!is_remote && !devlink_reload_action_is_supported(devlink, i)) ||
-			    i == DEVLINK_RELOAD_ACTION_UNSPEC ||
+		act_info = nla_nest_start(msg, DEVLINK_ATTR_RELOAD_ACTION_INFO);
+		if (!act_info)
+			goto nla_put_failure;
+
+		if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_ACTION, i))
+			goto action_info_nest_cancel;
+		act_stats = nla_nest_start(msg, DEVLINK_ATTR_RELOAD_ACTION_STATS);
+		if (!act_stats)
+			goto action_info_nest_cancel;
+
+		for (j = 0; j <= DEVLINK_RELOAD_LIMIT_MAX; j++) {
+			/* Remote stats are shown even if not locally supported.
+			 * Stats of actions with unspecified limit are shown
+			 * though drivers don't need to register unspecified
+			 * limit.
+			 */
+			if ((!is_remote && j != DEVLINK_RELOAD_LIMIT_UNSPEC &&
+			     !devlink_reload_limit_is_supported(devlink, j)) ||
 			    devlink_reload_combination_is_invalid(i, j))
 				continue;
@@ -571,13 +582,19 @@ static int devlink_reload_stats_put(struct sk_buff *msg, struct devlink *devlink
 				value = devlink->stats.reload_stats[stat_idx];
 			else
 				value = devlink->stats.remote_reload_stats[stat_idx];
-			if (devlink_reload_stat_put(msg, i, j, value))
-				goto nla_put_failure;
+			if (devlink_reload_stat_put(msg, j, value))
+				goto action_stats_nest_cancel;
 		}
+		nla_nest_end(msg, act_stats);
+		nla_nest_end(msg, act_info);
 	}
 
 	nla_nest_end(msg, reload_stats_attr);
 	return 0;
 
+action_stats_nest_cancel:
+	nla_nest_cancel(msg, act_stats);
+action_info_nest_cancel:
+	nla_nest_cancel(msg, act_info);
 nla_put_failure:
 	nla_nest_cancel(msg, reload_stats_attr);
 	return -EMSGSIZE;
@@ -755,6 +772,8 @@ static int devlink_nl_port_fill(struct sk_buff *msg, struct devlink *devlink,
 	if (nla_put_u32(msg, DEVLINK_ATTR_PORT_INDEX, devlink_port->index))
 		goto nla_put_failure;
 
+	/* Hold rtnl lock while accessing port's netdev attributes. */
+	rtnl_lock();
 	spin_lock_bh(&devlink_port->type_lock);
 	if (nla_put_u16(msg, DEVLINK_ATTR_PORT_TYPE, devlink_port->type))
 		goto nla_put_failure_type_locked;
@@ -763,9 +782,10 @@ static int devlink_nl_port_fill(struct sk_buff *msg, struct devlink *devlink,
 			devlink_port->desired_type))
 		goto nla_put_failure_type_locked;
 	if (devlink_port->type == DEVLINK_PORT_TYPE_ETH) {
+		struct net *net = devlink_net(devlink_port->devlink);
 		struct net_device *netdev = devlink_port->type_dev;
 
-		if (netdev &&
+		if (netdev && net_eq(net, dev_net(netdev)) &&
 		    (nla_put_u32(msg, DEVLINK_ATTR_PORT_NETDEV_IFINDEX,
 				 netdev->ifindex) ||
 		     nla_put_string(msg, DEVLINK_ATTR_PORT_NETDEV_NAME,
@@ -781,6 +801,7 @@ static int devlink_nl_port_fill(struct sk_buff *msg, struct devlink *devlink,
 			goto nla_put_failure_type_locked;
 	}
 	spin_unlock_bh(&devlink_port->type_lock);
+	rtnl_unlock();
 	if (devlink_nl_port_attrs_put(msg, devlink_port))
 		goto nla_put_failure;
 	if (devlink_nl_port_function_attrs_put(msg, devlink_port, extack))
@@ -791,6 +812,7 @@ static int devlink_nl_port_fill(struct sk_buff *msg, struct devlink *devlink,
 
 nla_put_failure_type_locked:
 	spin_unlock_bh(&devlink_port->type_lock);
+	rtnl_unlock();
 nla_put_failure:
 	genlmsg_cancel(msg, hdr);
 	return -EMSGSIZE;
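
The notable shape in the devlink rework is the error unwind: each nesting level gets its own cancel label, and failures fall through from the innermost nest outward before the outermost attribute is cancelled. A hedged userspace sketch of the same cascade using libmnl's checked attribute helpers — the attribute type numbers here are made up, not the real devlink uAPI values:

	#include <libmnl/libmnl.h>
	#include <linux/netlink.h>
	#include <stdint.h>
	#include <stdio.h>

	#define ATTR_ACTION_INFO	1	/* illustrative types */
	#define ATTR_ACTION		2
	#define ATTR_ACTION_STATS	3
	#define ATTR_STATS_VALUE	4

	static int put_action_stats(struct nlmsghdr *nlh, size_t buflen,
				    uint8_t action, uint32_t value)
	{
		struct nlattr *info, *stats;

		info = mnl_attr_nest_start_check(nlh, buflen, ATTR_ACTION_INFO);
		if (!info)
			return -1;
		if (!mnl_attr_put_u8_check(nlh, buflen, ATTR_ACTION, action))
			goto cancel_info;

		stats = mnl_attr_nest_start_check(nlh, buflen, ATTR_ACTION_STATS);
		if (!stats)
			goto cancel_info;
		if (!mnl_attr_put_u32_check(nlh, buflen, ATTR_STATS_VALUE, value))
			goto cancel_stats;

		mnl_attr_nest_end(nlh, stats);
		mnl_attr_nest_end(nlh, info);
		return 0;

	cancel_stats:
		mnl_attr_nest_cancel(nlh, stats);
	cancel_info:
		mnl_attr_nest_cancel(nlh, info);	/* drops everything nested inside */
		return -1;
	}

	int main(void)
	{
		char buf[MNL_SOCKET_BUFFER_SIZE];
		struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf);

		if (put_action_stats(nlh, sizeof(buf), 1, 42) == 0)
			printf("payload bytes: %u\n",
			       nlh->nlmsg_len - (unsigned)MNL_NLMSG_HDRLEN);
		return 0;
	}

Cancelling the outer nest truncates the message back past anything added inside it, which is why the labels can simply fall through in order, exactly as in the kernel function above.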


@@ -99,9 +99,14 @@ void gro_cells_destroy(struct gro_cells *gcells)
 		struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
 
 		napi_disable(&cell->napi);
-		netif_napi_del(&cell->napi);
+		__netif_napi_del(&cell->napi);
 		__skb_queue_purge(&cell->napi_skbs);
 	}
+	/* This barrier is needed because netpoll could access dev->napi_list
+	 * under rcu protection.
+	 */
+	synchronize_net();
+
 	free_percpu(gcells->cells);
 	gcells->cells = NULL;
 }


@@ -4549,7 +4549,7 @@ struct sk_buff *sock_dequeue_err_skb(struct sock *sk)
 	if (skb && (skb_next = skb_peek(q))) {
 		icmp_next = is_icmp_err_skb(skb_next);
 		if (icmp_next)
-			sk->sk_err = SKB_EXT_ERR(skb_next)->ee.ee_origin;
+			sk->sk_err = SKB_EXT_ERR(skb_next)->ee.ee_errno;
 	}
 	spin_unlock_irqrestore(&q->lock, flags);


@@ -427,7 +427,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
 	if (__inet_inherit_port(sk, newsk) < 0)
 		goto put_and_exit;
-	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
+	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);
 	if (*own_req)
 		ireq->ireq_opt = NULL;
 	else


@@ -533,7 +533,7 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
 		dccp_done(newsk);
 		goto out;
 	}
-	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
+	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);
 	/* Clone pktoptions received with SYN, if we own the req */
 	if (*own_req && ireq->pktopts) {
 		newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC);


@@ -787,7 +787,7 @@ static void reqsk_queue_hash_req(struct request_sock *req,
 	timer_setup(&req->rsk_timer, reqsk_timer_handler, TIMER_PINNED);
 	mod_timer(&req->rsk_timer, jiffies + timeout);
 
-	inet_ehash_insert(req_to_sk(req), NULL);
+	inet_ehash_insert(req_to_sk(req), NULL, NULL);
 	/* before letting lookups find us, make sure all req fields
 	 * are committed to memory and refcnt initialized.
 	 */


@@ -20,6 +20,9 @@
 #include <net/addrconf.h>
 #include <net/inet_connection_sock.h>
 #include <net/inet_hashtables.h>
+#if IS_ENABLED(CONFIG_IPV6)
+#include <net/inet6_hashtables.h>
+#endif
 #include <net/secure_seq.h>
 #include <net/ip.h>
 #include <net/tcp.h>
@@ -508,10 +511,52 @@ static u32 inet_sk_port_offset(const struct sock *sk)
 			inet->inet_dport);
 }
 
-/* insert a socket into ehash, and eventually remove another one
- * (The another one can be a SYN_RECV or TIMEWAIT
+/* Searches for an exsiting socket in the ehash bucket list.
+ * Returns true if found, false otherwise.
  */
-bool inet_ehash_insert(struct sock *sk, struct sock *osk)
+static bool inet_ehash_lookup_by_sk(struct sock *sk,
+				    struct hlist_nulls_head *list)
+{
+	const __portpair ports = INET_COMBINED_PORTS(sk->sk_dport, sk->sk_num);
+	const int sdif = sk->sk_bound_dev_if;
+	const int dif = sk->sk_bound_dev_if;
+	const struct hlist_nulls_node *node;
+	struct net *net = sock_net(sk);
+	struct sock *esk;
+
+	INET_ADDR_COOKIE(acookie, sk->sk_daddr, sk->sk_rcv_saddr);
+
+	sk_nulls_for_each_rcu(esk, node, list) {
+		if (esk->sk_hash != sk->sk_hash)
+			continue;
+		if (sk->sk_family == AF_INET) {
+			if (unlikely(INET_MATCH(esk, net, acookie,
+						sk->sk_daddr,
+						sk->sk_rcv_saddr,
+						ports, dif, sdif))) {
+				return true;
+			}
+		}
+#if IS_ENABLED(CONFIG_IPV6)
+		else if (sk->sk_family == AF_INET6) {
+			if (unlikely(INET6_MATCH(esk, net,
+						 &sk->sk_v6_daddr,
+						 &sk->sk_v6_rcv_saddr,
+						 ports, dif, sdif))) {
+				return true;
+			}
+		}
+#endif
+	}
+	return false;
+}
+
+/* Insert a socket into ehash, and eventually remove another one
+ * (The another one can be a SYN_RECV or TIMEWAIT)
+ *
+ * If an existing socket already exists, socket sk is not inserted,
+ * and sets found_dup_sk parameter to true.
+ */
+bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 {
 	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
 	struct hlist_nulls_head *list;
@@ -530,16 +575,23 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk)
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
 		ret = sk_nulls_del_node_init_rcu(osk);
+	} else if (found_dup_sk) {
+		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
+		if (*found_dup_sk)
+			ret = false;
 	}
+
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
+
 	spin_unlock(lock);
+
 	return ret;
 }
 
-bool inet_ehash_nolisten(struct sock *sk, struct sock *osk)
+bool inet_ehash_nolisten(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 {
-	bool ok = inet_ehash_insert(sk, osk);
+	bool ok = inet_ehash_insert(sk, osk, found_dup_sk);
 
 	if (ok) {
 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
@@ -583,7 +635,7 @@ int __inet_hash(struct sock *sk, struct sock *osk)
 	int err = 0;
 
 	if (sk->sk_state != TCP_LISTEN) {
-		inet_ehash_nolisten(sk, osk);
+		inet_ehash_nolisten(sk, osk, NULL);
 		return 0;
 	}
 	WARN_ON(!sk_unhashed(sk));
@@ -679,7 +731,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
 		tb = inet_csk(sk)->icsk_bind_hash;
 		spin_lock_bh(&head->lock);
 		if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) {
-			inet_ehash_nolisten(sk, NULL);
+			inet_ehash_nolisten(sk, NULL, NULL);
 			spin_unlock_bh(&head->lock);
 			return 0;
 		}
@@ -758,7 +810,7 @@ ok:
 	inet_bind_hash(sk, tb, port);
 	if (sk_unhashed(sk)) {
 		inet_sk(sk)->inet_sport = htons(port);
-		inet_ehash_nolisten(sk, (struct sock *)tw);
+		inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
 	}
 	if (tw)
 		inet_twsk_bind_unhash(tw, hinfo);
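
The essence of the fix: the duplicate lookup and the insert happen under the same ehash bucket lock, so no second socket can slip in between them, and the caller learns through found_dup_sk that it lost the race. The same shape in a toy userspace hash bucket (pthread; types and names are illustrative):

	#include <pthread.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct node {
		int key;
		struct node *next;
	};

	struct bucket {
		pthread_mutex_t lock;
		struct node *head;
	};

	/* Insert 'n' unless an equal key already exists; lookup and insert
	 * form one critical section, closing the TOCTOU window.
	 */
	static bool bucket_insert(struct bucket *b, struct node *n, bool *found_dup)
	{
		struct node *it;
		bool ok = true;

		pthread_mutex_lock(&b->lock);
		for (it = b->head; it; it = it->next) {
			if (it->key == n->key) {
				*found_dup = true;
				ok = false;
				break;
			}
		}
		if (ok) {
			n->next = b->head;
			b->head = n;
		}
		pthread_mutex_unlock(&b->lock);
		return ok;
	}

	int main(void)
	{
		struct bucket b = { PTHREAD_MUTEX_INITIALIZER, NULL };
		struct node n1 = { .key = 1 }, n2 = { .key = 1 };
		bool dup = false;

		bucket_insert(&b, &n1, &dup);
		bucket_insert(&b, &n2, &dup);	/* loses: dup becomes true */
		return dup ? 0 : 1;
	}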


@@ -198,6 +198,11 @@ static void tcp_reinit_congestion_control(struct sock *sk,
 	icsk->icsk_ca_setsockopt = 1;
 	memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
 
+	if (ca->flags & TCP_CONG_NEEDS_ECN)
+		INET_ECN_xmit(sk);
+	else
+		INET_ECN_dontxmit(sk);
+
 	if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
 		tcp_init_congestion_control(sk);
 }


@@ -980,17 +980,22 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
 
 	skb = tcp_make_synack(sk, dst, req, foc, synack_type, syn_skb);
 
-	tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
-			tcp_rsk(req)->syn_tos : inet_sk(sk)->tos;
-
 	if (skb) {
 		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
 
+		tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
+				tcp_rsk(req)->syn_tos & ~INET_ECN_MASK :
+				inet_sk(sk)->tos;
+
+		if (!INET_ECN_is_capable(tos) &&
+		    tcp_bpf_ca_needs_ecn((struct sock *)req))
+			tos |= INET_ECN_ECT_0;
+
 		rcu_read_lock();
 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
 					    ireq->ir_rmt_addr,
 					    rcu_dereference(ireq->ireq_opt),
-					    tos & ~INET_ECN_MASK);
+					    tos);
 		rcu_read_unlock();
 		err = net_xmit_eval(err);
 	}
@@ -1498,6 +1503,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 				  bool *own_req)
 {
 	struct inet_request_sock *ireq;
+	bool found_dup_sk = false;
 	struct inet_sock *newinet;
 	struct tcp_sock *newtp;
 	struct sock *newsk;
@@ -1575,12 +1581,22 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 	if (__inet_inherit_port(sk, newsk) < 0)
 		goto put_and_exit;
-	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
+	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash),
+				       &found_dup_sk);
 	if (likely(*own_req)) {
 		tcp_move_syn(newtp, req);
 		ireq->ireq_opt = NULL;
 	} else {
-		newinet->inet_opt = NULL;
+		if (!req_unhash && found_dup_sk) {
+			/* This code path should only be executed in the
+			 * syncookie case only
+			 */
+			bh_unlock_sock(newsk);
+			sock_put(newsk);
+			newsk = NULL;
+		} else {
+			newinet->inet_opt = NULL;
+		}
 	}
 	return newsk;
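
What the tos arithmetic is doing: the low two bits of the IPv4 tos (and IPv6 tclass) byte form the ECN field, the upper six the DSCP. The fix reflects only the DSCP bits from the incoming SYN and then lets the BPF-chosen congestion control decide the ECN codepoint. A small self-contained illustration — the bit values match the IP header definition, the function name is invented:

	#include <stdio.h>

	#define INET_ECN_MASK	3	/* low two bits of tos/tclass */
	#define INET_ECN_ECT_0	2	/* "ECN-capable transport" codepoint */

	static unsigned char reflect_tos(unsigned char syn_tos, int bpf_needs_ecn)
	{
		unsigned char tos = syn_tos & ~INET_ECN_MASK;	/* keep DSCP only */

		if (!(tos & INET_ECN_ECT_0) && bpf_needs_ecn)
			tos |= INET_ECN_ECT_0;	/* mark ECN-capable */
		return tos;
	}

	int main(void)
	{
		/* incoming SYN carried tos 0x8b = DSCP bits 0x88 + ECN bits 11 */
		printf("0x%02x\n", reflect_tos(0x8b, 1));	/* 0x8a: DSCP kept, ECT(0) set */
		return 0;
	}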


@@ -306,7 +306,9 @@ static int ip6addrlbl_del(struct net *net,
 /* add default label */
 static int __net_init ip6addrlbl_net_init(struct net *net)
 {
-	int err = 0;
+	struct ip6addrlbl_entry *p = NULL;
+	struct hlist_node *n;
+	int err;
 	int i;
 
 	ADDRLABEL(KERN_DEBUG "%s\n", __func__);
@@ -315,14 +317,20 @@ static int __net_init ip6addrlbl_net_init(struct net *net)
 	INIT_HLIST_HEAD(&net->ipv6.ip6addrlbl_table.head);
 
 	for (i = 0; i < ARRAY_SIZE(ip6addrlbl_init_table); i++) {
-		int ret = ip6addrlbl_add(net,
-					 ip6addrlbl_init_table[i].prefix,
-					 ip6addrlbl_init_table[i].prefixlen,
-					 0,
-					 ip6addrlbl_init_table[i].label, 0);
-		/* XXX: should we free all rules when we catch an error? */
-		if (ret && (!err || err != -ENOMEM))
-			err = ret;
+		err = ip6addrlbl_add(net,
+				     ip6addrlbl_init_table[i].prefix,
+				     ip6addrlbl_init_table[i].prefixlen,
+				     0,
+				     ip6addrlbl_init_table[i].label, 0);
+		if (err)
+			goto err_ip6addrlbl_add;
 	}
 	return 0;
+
+err_ip6addrlbl_add:
+	hlist_for_each_entry_safe(p, n, &net->ipv6.ip6addrlbl_table.head, list) {
+		hlist_del_rcu(&p->list);
+		kfree_rcu(p, rcu);
+	}
+	return err;
 }
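
The addrlabel change replaces "remember the first error and keep going" with the kernel's usual all-or-nothing unwind: on the first failure, free everything built so far and report the error. The generic shape, in a hedged standalone form (malloc stands in for ip6addrlbl_add):

	#include <stdio.h>
	#include <stdlib.h>

	static void *items[4];

	static int init_all(void)
	{
		int i, err;

		for (i = 0; i < 4; i++) {
			items[i] = malloc(128);
			if (!items[i]) {
				err = -1;
				goto err_unwind;	/* first failure aborts init */
			}
		}
		return 0;

	err_unwind:
		while (--i >= 0) {		/* free only what was built */
			free(items[i]);
			items[i] = NULL;
		}
		return err;
	}

	int main(void)
	{
		printf("init_all: %d\n", init_all());
		return 0;
	}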


@@ -527,15 +527,20 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
 		if (np->repflow && ireq->pktopts)
 			fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
 
+		tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
+				tcp_rsk(req)->syn_tos & ~INET_ECN_MASK :
+				np->tclass;
+
+		if (!INET_ECN_is_capable(tclass) &&
+		    tcp_bpf_ca_needs_ecn((struct sock *)req))
+			tclass |= INET_ECN_ECT_0;
+
 		rcu_read_lock();
 		opt = ireq->ipv6_opt;
-		tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
-				tcp_rsk(req)->syn_tos : np->tclass;
 		if (!opt)
 			opt = rcu_dereference(np->opt);
 		err = ip6_xmit(sk, skb, fl6, sk->sk_mark, opt,
-			       tclass & ~INET_ECN_MASK,
-			       sk->sk_priority);
+			       tclass, sk->sk_priority);
 		rcu_read_unlock();
 		err = net_xmit_eval(err);
 	}
@@ -1193,6 +1198,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 	const struct ipv6_pinfo *np = tcp_inet6_sk(sk);
 	struct ipv6_txoptions *opt;
 	struct inet_sock *newinet;
+	bool found_dup_sk = false;
 	struct tcp_sock *newtp;
 	struct sock *newsk;
 #ifdef CONFIG_TCP_MD5SIG
@@ -1368,7 +1374,8 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 		tcp_done(newsk);
 		goto out;
 	}
-	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
+	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash),
+				       &found_dup_sk);
 	if (*own_req) {
 		tcp_move_syn(newtp, req);
@@ -1383,6 +1390,15 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 				skb_set_owner_r(newnp->pktoptions, newsk);
 			}
 		}
+	} else {
+		if (!req_unhash && found_dup_sk) {
+			/* This code path should only be executed in the
+			 * syncookie case only
+			 */
+			bh_unlock_sock(newsk);
+			sock_put(newsk);
+			newsk = NULL;
+		}
 	}
 
 	return newsk;


@@ -1645,7 +1645,7 @@ static int iucv_callback_connreq(struct iucv_path *path,
 	}
 
 	/* Create the new socket */
-	nsk = iucv_sock_alloc(NULL, sk->sk_type, GFP_ATOMIC, 0);
+	nsk = iucv_sock_alloc(NULL, sk->sk_protocol, GFP_ATOMIC, 0);
 	if (!nsk) {
 		err = pr_iucv->path_sever(path, user_data);
 		iucv_path_free(path);
@@ -1851,7 +1851,7 @@ static int afiucv_hs_callback_syn(struct sock *sk, struct sk_buff *skb)
 		goto out;
 	}
 
-	nsk = iucv_sock_alloc(NULL, sk->sk_type, GFP_ATOMIC, 0);
+	nsk = iucv_sock_alloc(NULL, sk->sk_protocol, GFP_ATOMIC, 0);
 	bh_lock_sock(sk);
 	if ((sk->sk_state != IUCV_LISTEN) ||
 	    sk_acceptq_is_full(sk) ||


@@ -543,9 +543,8 @@ create_msk:
 		fallback = true;
 	} else if (subflow_req->mp_join) {
 		mptcp_get_options(skb, &mp_opt);
-		if (!mp_opt.mp_join ||
-		    !mptcp_can_accept_new_subflow(subflow_req->msk) ||
-		    !subflow_hmac_valid(req, &mp_opt)) {
+		if (!mp_opt.mp_join || !subflow_hmac_valid(req, &mp_opt) ||
+		    !mptcp_can_accept_new_subflow(subflow_req->msk)) {
 			SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
 			fallback = true;
 		}


@@ -958,14 +958,13 @@ static int dec_ttl_exception_handler(struct datapath *dp, struct sk_buff *skb,
 {
 	/* The first action is always 'OVS_DEC_TTL_ATTR_ARG'. */
 	struct nlattr *dec_ttl_arg = nla_data(attr);
-	int rem = nla_len(attr);
 
 	if (nla_len(dec_ttl_arg)) {
-		struct nlattr *actions = nla_next(dec_ttl_arg, &rem);
+		struct nlattr *actions = nla_data(dec_ttl_arg);
 
 		if (actions)
-			return clone_execute(dp, skb, key, 0, actions, rem,
-					     last, false);
+			return clone_execute(dp, skb, key, 0, nla_data(actions),
+					     nla_len(actions), last, false);
 	}
 	consume_skb(skb);
 	return 0;


@@ -2503,28 +2503,42 @@ static int validate_and_copy_dec_ttl(struct net *net,
 				     __be16 eth_type, __be16 vlan_tci,
 				     u32 mpls_label_count, bool log)
 {
-	int start, err;
-	u32 nested = true;
+	const struct nlattr *attrs[OVS_DEC_TTL_ATTR_MAX + 1];
+	int start, action_start, err, rem;
+	const struct nlattr *a, *actions;
+
+	memset(attrs, 0, sizeof(attrs));
+	nla_for_each_nested(a, attr, rem) {
+		int type = nla_type(a);
 
-	if (!nla_len(attr))
-		return ovs_nla_add_action(sfa, OVS_ACTION_ATTR_DEC_TTL,
-					  NULL, 0, log);
+		/* Ignore unknown attributes to be future proof. */
+		if (type > OVS_DEC_TTL_ATTR_MAX)
+			continue;
+
+		if (!type || attrs[type])
+			return -EINVAL;
+
+		attrs[type] = a;
+	}
+
+	actions = attrs[OVS_DEC_TTL_ATTR_ACTION];
+	if (rem || !actions || (nla_len(actions) && nla_len(actions) < NLA_HDRLEN))
+		return -EINVAL;
 
 	start = add_nested_action_start(sfa, OVS_ACTION_ATTR_DEC_TTL, log);
 	if (start < 0)
 		return start;
 
-	err = ovs_nla_add_action(sfa, OVS_DEC_TTL_ATTR_ACTION, &nested,
-				 sizeof(nested), log);
-	if (err)
-		return err;
+	action_start = add_nested_action_start(sfa, OVS_DEC_TTL_ATTR_ACTION, log);
+	if (action_start < 0)
+		return start;
 
-	err = __ovs_nla_copy_actions(net, attr, key, sfa, eth_type,
+	err = __ovs_nla_copy_actions(net, actions, key, sfa, eth_type,
 				     vlan_tci, mpls_label_count, log);
 	if (err)
 		return err;
 
+	add_nested_action_end(*sfa, action_start);
 	add_nested_action_end(*sfa, start);
+
 	return 0;
 }
@@ -3487,20 +3501,42 @@ out:
 static int dec_ttl_action_to_attr(const struct nlattr *attr,
 				  struct sk_buff *skb)
 {
-	int err = 0, rem = nla_len(attr);
-	struct nlattr *start;
+	struct nlattr *start, *action_start;
+	const struct nlattr *a;
+	int err = 0, rem;
 
 	start = nla_nest_start_noflag(skb, OVS_ACTION_ATTR_DEC_TTL);
 	if (!start)
 		return -EMSGSIZE;
 
-	err = ovs_nla_put_actions(nla_data(attr), rem, skb);
-	if (err)
-		nla_nest_cancel(skb, start);
-	else
-		nla_nest_end(skb, start);
+	nla_for_each_attr(a, nla_data(attr), nla_len(attr), rem) {
+		switch (nla_type(a)) {
+		case OVS_DEC_TTL_ATTR_ACTION:
+			action_start = nla_nest_start_noflag(skb, OVS_DEC_TTL_ATTR_ACTION);
+			if (!action_start) {
+				err = -EMSGSIZE;
+				goto out;
+			}
+
+			err = ovs_nla_put_actions(nla_data(a), nla_len(a), skb);
+			if (err)
+				goto out;
+
+			nla_nest_end(skb, action_start);
+			break;
+
+		default:
+			/* Ignore all other option to be future compatible */
+			break;
+		}
+	}
 
-	return err;
+	nla_nest_end(skb, start);
+	return 0;
+
+out:
+	nla_nest_cancel(skb, start);
+	return err;
 }
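
Both ovs changes hinge on the same wire-format rule: the dec_ttl payload is now a set of typed, nested attributes, and parsers skip types they do not recognize so new attributes can be added later without breaking old kernels. A bare-bones TLV walk with that ignore-unknown rule — the layout is simplified from netlink, and a little-endian host is assumed for the test buffer:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	struct tlv {			/* simplified: 4-byte header, no padding */
		uint16_t len;		/* header + payload */
		uint16_t type;
	};

	#define TLV_ACTION 1		/* the only type we understand today */

	static int walk(const uint8_t *buf, size_t buflen)
	{
		size_t off = 0;

		while (off + sizeof(struct tlv) <= buflen) {
			struct tlv h;

			memcpy(&h, buf + off, sizeof(h));
			if (h.len < sizeof(h) || off + h.len > buflen)
				return -1;	/* malformed */

			if (h.type == TLV_ACTION)
				printf("action, %u payload bytes\n",
				       h.len - (unsigned)sizeof(h));
			/* unknown types are skipped, keeping old parsers
			 * compatible with future senders
			 */
			off += h.len;
		}
		return 0;
	}

	int main(void)
	{
		uint8_t buf[8] = { 8, 0, 1, 0, 'T', 'T', 'L', '!' };	/* len=8, type=1 */

		return walk(buf, sizeof(buf));
	}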


@@ -93,8 +93,8 @@
 
 /*
    Assumptions:
-   - If the device has no dev->header_ops, there is no LL header visible
-     above the device. In this case, its hard_header_len should be 0.
+   - If the device has no dev->header_ops->create, there is no LL header
+     visible above the device. In this case, its hard_header_len should be 0.
      The device may prepend its own header internally. In this case, its
      needed_headroom should be set to the space needed for it to add its
      internal header.
@@ -108,26 +108,26 @@
 On receive:
 -----------
 
-Incoming, dev->header_ops != NULL
+Incoming, dev_has_header(dev) == true
    mac_header -> ll header
    data       -> data
 
-Outgoing, dev->header_ops != NULL
+Outgoing, dev_has_header(dev) == true
    mac_header -> ll header
    data       -> ll header
 
-Incoming, dev->header_ops == NULL
+Incoming, dev_has_header(dev) == false
    mac_header -> data
      However drivers often make it point to the ll header.
      This is incorrect because the ll header should be invisible to us.
    data       -> data
 
-Outgoing, dev->header_ops == NULL
+Outgoing, dev_has_header(dev) == false
    mac_header -> data. ll header is invisible to us.
    data       -> data
 
 Resume
 
-  If dev->header_ops == NULL we are unable to restore the ll header,
+  If dev_has_header(dev) == false we are unable to restore the ll header,
     because it is invisible to us.
@@ -2069,7 +2069,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
 
 	skb->dev = dev;
 
-	if (dev->header_ops) {
+	if (dev_has_header(dev)) {
 		/* The device has an explicit notion of ll header,
 		 * exported to higher levels.
 		 *
@@ -2198,7 +2198,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 	if (!net_eq(dev_net(dev), sock_net(sk)))
 		goto drop;
 
-	if (dev->header_ops) {
+	if (dev_has_header(dev)) {
 		if (sk->sk_type != SOCK_DGRAM)
 			skb_push(skb, skb->data - skb_mac_header(skb));
 		else if (skb->pkt_type == PACKET_OUTGOING) {
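
dev_has_header() narrows the old test: what packet sockets actually care about is not whether an ops table exists, but whether it can build a link-layer header. Checking the specific method rather than the whole ops struct is a common C pattern; a toy version with invented types:

	#include <stdbool.h>
	#include <stdio.h>

	struct header_ops {
		int (*create)(void);		/* may be NULL on L3-only devices */
		int (*parse)(void);
	};

	struct device {
		const struct header_ops *header_ops;
	};

	static bool dev_has_header(const struct device *dev)
	{
		/* ops table present AND the one method we rely on implemented */
		return dev->header_ops && dev->header_ops->create;
	}

	int main(void)
	{
		static const struct header_ops l3_ops = { .create = NULL };
		struct device d = { .header_ops = &l3_ops };

		printf("%d\n", dev_has_header(&d));	/* 0: ops exist, no create() */
		return 0;
	}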


@@ -96,10 +96,19 @@ static void rose_loopback_timer(struct timer_list *unused)
 		}
 
 		if (frametype == ROSE_CALL_REQUEST) {
-			if ((dev = rose_dev_get(dest)) != NULL) {
-				if (rose_rx_call_request(skb, dev, rose_loopback_neigh, lci_o) == 0)
-					kfree_skb(skb);
-			} else {
+			if (!rose_loopback_neigh->dev) {
+				kfree_skb(skb);
+				continue;
+			}
+
+			dev = rose_dev_get(dest);
+			if (!dev) {
 				kfree_skb(skb);
+				continue;
+			}
+
+			if (rose_rx_call_request(skb, dev, rose_loopback_neigh, lci_o) == 0) {
+				dev_put(dev);
+				kfree_skb(skb);
 			}
 		} else {


@@ -1262,6 +1262,8 @@ void tls_device_offload_cleanup_rx(struct sock *sk)
 	if (tls_ctx->tx_conf != TLS_HW) {
 		dev_put(netdev);
 		tls_ctx->netdev = NULL;
+	} else {
+		set_bit(TLS_RX_DEV_CLOSED, &tls_ctx->flags);
 	}
 out:
 	up_read(&device_offload_lock);
@@ -1291,7 +1293,8 @@ static int tls_device_down(struct net_device *netdev)
 		if (ctx->tx_conf == TLS_HW)
 			netdev->tlsdev_ops->tls_dev_del(netdev, ctx,
 							TLS_OFFLOAD_CTX_DIR_TX);
-		if (ctx->rx_conf == TLS_HW)
+		if (ctx->rx_conf == TLS_HW &&
+		    !test_bit(TLS_RX_DEV_CLOSED, &ctx->flags))
 			netdev->tlsdev_ops->tls_dev_del(netdev, ctx,
 							TLS_OFFLOAD_CTX_DIR_RX);
 		WRITE_ONCE(ctx->netdev, NULL);


@@ -1295,6 +1295,12 @@ static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
 			return NULL;
 		}
 
+		if (!skb_queue_empty(&sk->sk_receive_queue)) {
+			__strp_unpause(&ctx->strp);
+			if (ctx->recv_pkt)
+				return ctx->recv_pkt;
+		}
+
 		if (sk->sk_shutdown & RCV_SHUTDOWN)
 			return NULL;


@@ -841,8 +841,10 @@ void virtio_transport_release(struct vsock_sock *vsk)
 		virtio_transport_free_pkt(pkt);
 	}
 
-	if (remove_sock)
+	if (remove_sock) {
+		sock_set_flag(sk, SOCK_DONE);
 		vsock_remove_sock(vsk);
+	}
 }
 EXPORT_SYMBOL_GPL(virtio_transport_release);
@@ -1132,8 +1134,8 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
 
 	lock_sock(sk);
 
-	/* Check if sk has been released before lock_sock */
-	if (sk->sk_shutdown == SHUTDOWN_MASK) {
+	/* Check if sk has been closed before lock_sock */
+	if (sock_flag(sk, SOCK_DONE)) {
 		(void)virtio_transport_reset_no_sock(t, pkt);
 		release_sock(sk);
 		sock_put(sk);
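
The vsock fix works because the flag is set before the socket is unpublished, and the receive path re-checks it after taking the lock, so packets that raced with close are rejected instead of touching a half-dead socket. A sketch of that check-under-lock idiom (pthread; names invented for illustration):

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct conn {
		pthread_mutex_t lock;
		bool done;		/* set at close, before unpublishing */
	};

	static void conn_close(struct conn *c)
	{
		pthread_mutex_lock(&c->lock);
		c->done = true;		/* mark first... */
		pthread_mutex_unlock(&c->lock);
		/* ...then remove the conn from the lookup table */
	}

	/* Receive path: the lookup may have found the conn just before close. */
	static int conn_recv(struct conn *c)
	{
		int ret = 0;

		pthread_mutex_lock(&c->lock);
		if (c->done)
			ret = -1;	/* closed while we waited: reset the peer */
		else
			printf("deliver\n");
		pthread_mutex_unlock(&c->lock);
		return ret;
	}

	int main(void)
	{
		struct conn c = { PTHREAD_MUTEX_INITIALIZER, false };

		conn_close(&c);
		printf("recv after close: %d\n", conn_recv(&c));	/* -1 */
		return 0;
	}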