bluetooth-next pull request for net-next:

 - Add support for MediaTek MT7921S SDIO
  - Various fixes for -Wflex-array-member-not-at-end and -Wfamnae
  - Add USB HW IDs for MT7921/MT7922/MT7925
  - Add support for Intel BlazarI and Filmore Peak2 (BE201)
  - Add initial support for Intel PCIe driver
  - Remove HCI_AMP support
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE7E6oRXp8w05ovYr/9JCA4xAyCykFAmZDfKAZHGx1aXoudm9u
 LmRlbnR6QGludGVsLmNvbQAKCRD0kIDjEDILKSqYD/93GnwbcF3dVICJ06uwt54c
 WhhVXOfb8u7uoUT5a01Bz1s4WPfVH5yiv2JhjFs1kppcfQV6mNvWJWk0Lp9WucSg
 XyXA3lZMNiOWcG13A3P8K5GENgJeUjGatcE1OUrOxnDdnmX9CZZVQ5HSgKDaOEC3
 C5OzeM+yfXLo+hVI4NWfoiD7xbXv1vSVk/7K9PsxtNlmI/rMsWC5j2w1pUXOWy39
 jcdeJ1Vjdoeky0Wvfizw2nT6M9o9uheNuqUCH6LywrRBXz8c2vcO4kO6XZbMk3bi
 5pHYGKeOy7pUg49KGreVoujsPljnUhwj9MaH2FhTUEVOKvRsLMoYGXZVaxSDI3zK
 Rx7VacQt0AIGnFHo25wCZGUnGd7xHPZ/Ics/YCazU9vJgdVik0DpmCWWCwFc45Eu
 Y4ZeNoCdRDsNHPk85qkb/mxwcsB2Cz0RHU49gNxahXeT3vbHu3IRIomFY/zbcMRC
 nxhhi75KkWiWyupBNJLo/q0nemeWSMuSFKnHygEENIMGUVhwmeIcocFWFdd9Dk6P
 V/cUGyvAx9K4fxIa047bEt33HIvN70WWqlyFnkQ5qTX9OjCT03+GaUvhuLbE1ZXW
 ERQ69Dfa4Ycujj5XVizZQyWewgokKF3EhSN8kD5cIUQFdPFgS8seSZgfnaCdqkKK
 nOowk5F+feApn8BXJZLcVw==
 =8ZcG
 -----END PGP SIGNATURE-----

Merge tag 'for-net-next-2024-05-14' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next

Luiz Augusto von Dentz says:

====================
bluetooth-next pull request for net-next:

 - Add support for MediaTek MT7921S SDIO
 - Various fixes for -Wflex-array-member-not-at-end and -Wfamnae
 - Add USB HW IDs for MT7921/MT7922/MT7925
 - Add support for Intel BlazarI and Filmore Peak2 (BE201)
 - Add initial support for Intel PCIe driver
 - Remove HCI_AMP support

* tag 'for-net-next-2024-05-14' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next: (47 commits)
  Bluetooth: btintel_pcie: Refactor and code cleanup
  Bluetooth: btintel_pcie: Fix warning reported by sparse
  Bluetooth: hci_core: Fix not handling hdev->le_num_of_adv_sets=1
  Bluetooth: btintel: Fix compiler warning for multi_v7_defconfig config
  Bluetooth: btintel_pcie: Fix compiler warnings
  Bluetooth: btintel_pcie: Add *setup* function to download firmware
  Bluetooth: btintel_pcie: Add support for PCIe transport
  Bluetooth: btintel: Export few static functions
  Bluetooth: HCI: Remove HCI_AMP support
  Bluetooth: L2CAP: Fix div-by-zero in l2cap_le_flowctl_init()
  Bluetooth: qca: Fix error code in qca_read_fw_build_info()
  Bluetooth: hci_conn: Use __counted_by() and avoid -Wfamnae warning
  Bluetooth: btintel: Add support for Filmore Peak2 (BE201)
  Bluetooth: btintel: Add support for BlazarI
  LE Create Connection command timeout increased to 20 secs
  dt-bindings: net: bluetooth: Add MediaTek MT7921S SDIO Bluetooth
  Bluetooth: compute LE flow credits based on recvbuf space
  Bluetooth: hci_sync: Use cmd->num_cis instead of magic number
  Bluetooth: hci_conn: Use struct_size() in hci_le_big_create_sync()
  Bluetooth: qca: clean up defines
  ...
====================

Link: https://lore.kernel.org/r/20240514150206.606432-1-luiz.dentz@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

@ -0,0 +1,55 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/bluetooth/mediatek,mt7921s-bluetooth.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MediaTek MT7921S Bluetooth
maintainers:
- Sean Wang <sean.wang@mediatek.com>
description:
MT7921S is an SDIO-attached dual-radio WiFi+Bluetooth combo chip; each
radio is exposed as its own function on a shared SDIO interface. The chip
has two dedicated reset lines, one for each function core.
This binding covers only the Bluetooth SDIO function, described by a
single device node.
allOf:
- $ref: bluetooth-controller.yaml#
properties:
compatible:
enum:
- mediatek,mt7921s-bluetooth
reg:
const: 2
reset-gpios:
maxItems: 1
description:
An active-low reset line for the Bluetooth core; on typical M.2
key E modules this is the W_DISABLE2# pin.
required:
- compatible
- reg
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
mmc {
#address-cells = <1>;
#size-cells = <0>;
bluetooth@2 {
compatible = "mediatek,mt7921s-bluetooth";
reg = <2>;
reset-gpios = <&pio 8 GPIO_ACTIVE_LOW>;
};
};


@ -14,20 +14,25 @@ description:
properties:
compatible:
enum:
- brcm,bcm20702a1
- brcm,bcm4329-bt
- brcm,bcm4330-bt
- brcm,bcm4334-bt
- brcm,bcm43430a0-bt
- brcm,bcm43430a1-bt
- brcm,bcm43438-bt
- brcm,bcm4345c5
- brcm,bcm43540-bt
- brcm,bcm4335a0
- brcm,bcm4349-bt
- cypress,cyw4373a0-bt
- infineon,cyw55572-bt
oneOf:
- items:
- enum:
- infineon,cyw43439-bt
- const: brcm,bcm4329-bt
- enum:
- brcm,bcm20702a1
- brcm,bcm4329-bt
- brcm,bcm4330-bt
- brcm,bcm4334-bt
- brcm,bcm43430a0-bt
- brcm,bcm43430a1-bt
- brcm,bcm43438-bt
- brcm,bcm4345c5
- brcm,bcm43540-bt
- brcm,bcm4335a0
- brcm,bcm4349-bt
- cypress,cyw4373a0-bt
- infineon,cyw55572-bt
shutdown-gpios:
maxItems: 1


@ -13763,6 +13763,7 @@ M: Sean Wang <sean.wang@mediatek.com>
L: linux-bluetooth@vger.kernel.org
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/net/bluetooth/mediatek,mt7921s-bluetooth.yaml
F: Documentation/devicetree/bindings/net/mediatek-bluetooth.txt
F: drivers/bluetooth/btmtkuart.c


@ -478,5 +478,16 @@ config BT_NXPUART
Say Y here to compile support for NXP Bluetooth UART device into
the kernel, or say M here to compile as a module (btnxpuart).
config BT_INTEL_PCIE
tristate "Intel HCI PCIe driver"
depends on PCI
select BT_INTEL
select FW_LOADER
help
Intel Bluetooth transport driver for PCIe.
This driver is required if you want to use an Intel Bluetooth device
with a PCIe interface.
Say Y here to compile support for the Intel Bluetooth PCIe device into
the kernel, or say M to compile it as a module (btintel_pcie).
endmenu


@ -17,6 +17,7 @@ obj-$(CONFIG_BT_HCIBTUSB) += btusb.o
obj-$(CONFIG_BT_HCIBTSDIO) += btsdio.o
obj-$(CONFIG_BT_INTEL) += btintel.o
obj-$(CONFIG_BT_INTEL_PCIE) += btintel_pcie.o btintel.o
obj-$(CONFIG_BT_ATH3K) += ath3k.o
obj-$(CONFIG_BT_MRVL) += btmrvl.o
obj-$(CONFIG_BT_MRVL_SDIO) += btmrvl_sdio.o


@ -3,7 +3,6 @@
* Copyright (c) 2008-2009 Atheros Communications Inc.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
@ -128,7 +127,6 @@ MODULE_DEVICE_TABLE(usb, ath3k_table);
* for AR3012
*/
static const struct usb_device_id ath3k_blist_tbl[] = {
/* Atheros AR3012 with sflash firmware*/
{ USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 },
@ -202,7 +200,7 @@ static inline void ath3k_log_failed_loading(int err, int len, int size,
#define TIMEGAP_USEC_MAX 100
static int ath3k_load_firmware(struct usb_device *udev,
const struct firmware *firmware)
const struct firmware *firmware)
{
u8 *send_buf;
int len = 0;
@ -237,9 +235,9 @@ static int ath3k_load_firmware(struct usb_device *udev,
memcpy(send_buf, firmware->data + sent, size);
err = usb_bulk_msg(udev, pipe, send_buf, size,
&len, 3000);
&len, 3000);
if (err || (len != size)) {
if (err || len != size) {
ath3k_log_failed_loading(err, len, size, count);
goto error;
}
@ -262,7 +260,7 @@ static int ath3k_get_state(struct usb_device *udev, unsigned char *state)
}
static int ath3k_get_version(struct usb_device *udev,
struct ath3k_version *version)
struct ath3k_version *version)
{
return usb_control_msg_recv(udev, 0, ATH3K_GETVERSION,
USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
@ -271,7 +269,7 @@ static int ath3k_get_version(struct usb_device *udev,
}
static int ath3k_load_fwfile(struct usb_device *udev,
const struct firmware *firmware)
const struct firmware *firmware)
{
u8 *send_buf;
int len = 0;
@ -310,8 +308,8 @@ static int ath3k_load_fwfile(struct usb_device *udev,
memcpy(send_buf, firmware->data + sent, size);
err = usb_bulk_msg(udev, pipe, send_buf, size,
&len, 3000);
if (err || (len != size)) {
&len, 3000);
if (err || len != size) {
ath3k_log_failed_loading(err, len, size, count);
kfree(send_buf);
return err;
@ -425,7 +423,6 @@ static int ath3k_load_syscfg(struct usb_device *udev)
}
switch (fw_version.ref_clock) {
case ATH3K_XTAL_FREQ_26M:
clk_value = 26;
break;
@ -441,7 +438,7 @@ static int ath3k_load_syscfg(struct usb_device *udev)
}
snprintf(filename, ATH3K_NAME_LEN, "ar3k/ramps_0x%08x_%d%s",
le32_to_cpu(fw_version.rom_version), clk_value, ".dfu");
le32_to_cpu(fw_version.rom_version), clk_value, ".dfu");
ret = request_firmware(&firmware, filename, &udev->dev);
if (ret < 0) {
@ -456,7 +453,7 @@ static int ath3k_load_syscfg(struct usb_device *udev)
}
static int ath3k_probe(struct usb_interface *intf,
const struct usb_device_id *id)
const struct usb_device_id *id)
{
const struct firmware *firmware;
struct usb_device *udev = interface_to_usbdev(intf);
@ -505,10 +502,10 @@ static int ath3k_probe(struct usb_interface *intf,
if (ret < 0) {
if (ret == -ENOENT)
BT_ERR("Firmware file \"%s\" not found",
ATH3K_FIRMWARE);
ATH3K_FIRMWARE);
else
BT_ERR("Firmware file \"%s\" request failed (err=%d)",
ATH3K_FIRMWARE, ret);
ATH3K_FIRMWARE, ret);
return ret;
}


@ -245,7 +245,7 @@ static int btintel_set_diag_combined(struct hci_dev *hdev, bool enable)
return ret;
}
static void btintel_hw_error(struct hci_dev *hdev, u8 code)
void btintel_hw_error(struct hci_dev *hdev, u8 code)
{
struct sk_buff *skb;
u8 type = 0x00;
@ -277,6 +277,7 @@ static void btintel_hw_error(struct hci_dev *hdev, u8 code)
kfree_skb(skb);
}
EXPORT_SYMBOL_GPL(btintel_hw_error);
int btintel_version_info(struct hci_dev *hdev, struct intel_version *ver)
{
@ -455,8 +456,8 @@ int btintel_read_version(struct hci_dev *hdev, struct intel_version *ver)
}
EXPORT_SYMBOL_GPL(btintel_read_version);
static int btintel_version_info_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version)
int btintel_version_info_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version)
{
const char *variant;
@ -481,6 +482,7 @@ static int btintel_version_info_tlv(struct hci_dev *hdev,
case 0x19: /* Slr-F */
case 0x1b: /* Mgr */
case 0x1c: /* Gale Peak (GaP) */
case 0x1e: /* BlazarI (Bzr) */
break;
default:
bt_dev_err(hdev, "Unsupported Intel hardware variant (0x%x)",
@ -489,7 +491,7 @@ static int btintel_version_info_tlv(struct hci_dev *hdev,
}
switch (version->img_type) {
case 0x01:
case BTINTEL_IMG_BOOTLOADER:
variant = "Bootloader";
/* It is required that every single firmware fragment is acknowledged
* with a command complete event. If the boot parameters indicate
@ -521,7 +523,10 @@ static int btintel_version_info_tlv(struct hci_dev *hdev,
version->min_fw_build_nn, version->min_fw_build_cw,
2000 + version->min_fw_build_yy);
break;
case 0x03:
case BTINTEL_IMG_IML:
variant = "Intermediate loader";
break;
case BTINTEL_IMG_OP:
variant = "Firmware";
break;
default:
@ -535,15 +540,16 @@ static int btintel_version_info_tlv(struct hci_dev *hdev,
bt_dev_info(hdev, "%s timestamp %u.%u buildtype %u build %u", variant,
2000 + (version->timestamp >> 8), version->timestamp & 0xff,
version->build_type, version->build_num);
if (version->img_type == 0x03)
if (version->img_type == BTINTEL_IMG_OP)
bt_dev_info(hdev, "Firmware SHA1: 0x%8.8x", version->git_sha1);
return 0;
}
EXPORT_SYMBOL_GPL(btintel_version_info_tlv);
static int btintel_parse_version_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version,
struct sk_buff *skb)
int btintel_parse_version_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version,
struct sk_buff *skb)
{
/* Consume Command Complete Status field */
skb_pull(skb, 1);
@ -645,6 +651,7 @@ static int btintel_parse_version_tlv(struct hci_dev *hdev,
return 0;
}
EXPORT_SYMBOL_GPL(btintel_parse_version_tlv);
static int btintel_read_version_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version)
@ -1172,7 +1179,7 @@ static int btintel_download_fw_tlv(struct hci_dev *hdev,
* If the firmware version has changed that means it needs to be reset
* to bootloader when operational so the new firmware can be loaded.
*/
if (ver->img_type == 0x03)
if (ver->img_type == BTINTEL_IMG_OP)
return -EINVAL;
/* iBT hardware variants 0x0b, 0x0c, 0x11, 0x12, 0x13, 0x14 support
@ -2194,10 +2201,26 @@ static void btintel_get_fw_name_tlv(const struct intel_version_tlv *ver,
char *fw_name, size_t len,
const char *suffix)
{
const char *format;
/* The firmware file name for new generation controllers will be
* ibt-<cnvi_top type+cnvi_top step>-<cnvr_top type+cnvr_top step>
*/
snprintf(fw_name, len, "intel/ibt-%04x-%04x.%s",
switch (ver->cnvi_top & 0xfff) {
/* Only Blazar product supports downloading of intermediate loader
* image
*/
case BTINTEL_CNVI_BLAZARI:
if (ver->img_type == BTINTEL_IMG_BOOTLOADER)
format = "intel/ibt-%04x-%04x-iml.%s";
else
format = "intel/ibt-%04x-%04x.%s";
break;
default:
format = "intel/ibt-%04x-%04x.%s";
break;
}
snprintf(fw_name, len, format,
INTEL_CNVX_TOP_PACK_SWAB(INTEL_CNVX_TOP_TYPE(ver->cnvi_top),
INTEL_CNVX_TOP_STEP(ver->cnvi_top)),
INTEL_CNVX_TOP_PACK_SWAB(INTEL_CNVX_TOP_TYPE(ver->cnvr_top),
@ -2230,7 +2253,7 @@ static int btintel_prepare_fw_download_tlv(struct hci_dev *hdev,
* It is not possible to use the Secure Boot Parameters in this
* case since that command is only available in bootloader mode.
*/
if (ver->img_type == 0x03) {
if (ver->img_type == BTINTEL_IMG_OP) {
btintel_clear_flag(hdev, INTEL_BOOTLOADER);
btintel_check_bdaddr(hdev);
} else {
@ -2577,8 +2600,8 @@ static void btintel_set_dsm_reset_method(struct hci_dev *hdev,
data->acpi_reset_method = btintel_acpi_reset_method;
}
static int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
struct intel_version_tlv *ver)
int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
struct intel_version_tlv *ver)
{
u32 boot_param;
char ddcname[64];
@ -2600,13 +2623,30 @@ static int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
return err;
/* check if controller is already having an operational firmware */
if (ver->img_type == 0x03)
if (ver->img_type == BTINTEL_IMG_OP)
goto finish;
err = btintel_boot(hdev, boot_param);
if (err)
return err;
err = btintel_read_version_tlv(hdev, ver);
if (err)
return err;
/* If image type returned is BTINTEL_IMG_IML, then controller supports
* intermediate loader image
*/
if (ver->img_type == BTINTEL_IMG_IML) {
err = btintel_prepare_fw_download_tlv(hdev, ver, &boot_param);
if (err)
return err;
err = btintel_boot(hdev, boot_param);
if (err)
return err;
}
btintel_clear_flag(hdev, INTEL_BOOTLOADER);
btintel_get_fw_name_tlv(ver, ddcname, sizeof(ddcname), "ddc");
@ -2645,8 +2685,9 @@ finish:
return 0;
}
EXPORT_SYMBOL_GPL(btintel_bootloader_setup_tlv);
static void btintel_set_msft_opcode(struct hci_dev *hdev, u8 hw_variant)
void btintel_set_msft_opcode(struct hci_dev *hdev, u8 hw_variant)
{
switch (hw_variant) {
/* Legacy bootloader devices that supports MSFT Extension */
@ -2662,6 +2703,7 @@ static void btintel_set_msft_opcode(struct hci_dev *hdev, u8 hw_variant)
case 0x19:
case 0x1b:
case 0x1c:
case 0x1e:
hci_set_msft_opcode(hdev, 0xFC1E);
break;
default:
@ -2669,6 +2711,7 @@ static void btintel_set_msft_opcode(struct hci_dev *hdev, u8 hw_variant)
break;
}
}
EXPORT_SYMBOL_GPL(btintel_set_msft_opcode);
static void btintel_print_fseq_info(struct hci_dev *hdev)
{
@ -2920,6 +2963,11 @@ static int btintel_setup_combined(struct hci_dev *hdev)
err = -EINVAL;
}
hci_set_hw_info(hdev,
"INTEL platform=%u variant=%u revision=%u",
ver.hw_platform, ver.hw_variant,
ver.hw_revision);
goto exit_error;
}
@ -2996,6 +3044,7 @@ static int btintel_setup_combined(struct hci_dev *hdev)
case 0x19:
case 0x1b:
case 0x1c:
case 0x1e:
/* Display version information of TLV type */
btintel_version_info_tlv(hdev, &ver_tlv);
@ -3024,13 +3073,17 @@ static int btintel_setup_combined(struct hci_dev *hdev)
break;
}
hci_set_hw_info(hdev, "INTEL platform=%u variant=%u",
INTEL_HW_PLATFORM(ver_tlv.cnvi_bt),
INTEL_HW_VARIANT(ver_tlv.cnvi_bt));
exit_error:
kfree_skb(skb);
return err;
}
static int btintel_shutdown_combined(struct hci_dev *hdev)
int btintel_shutdown_combined(struct hci_dev *hdev)
{
struct sk_buff *skb;
int ret;
@ -3064,6 +3117,7 @@ static int btintel_shutdown_combined(struct hci_dev *hdev)
return 0;
}
EXPORT_SYMBOL_GPL(btintel_shutdown_combined);
int btintel_configure_setup(struct hci_dev *hdev, const char *driver_name)
{


@ -51,6 +51,12 @@ struct intel_tlv {
u8 val[];
} __packed;
#define BTINTEL_CNVI_BLAZARI 0x900
#define BTINTEL_IMG_BOOTLOADER 0x01 /* Bootloader image */
#define BTINTEL_IMG_IML 0x02 /* Intermediate image */
#define BTINTEL_IMG_OP 0x03 /* Operational image */
struct intel_version_tlv {
u32 cnvi_top;
u32 cnvr_top;
@ -203,7 +209,7 @@ struct btintel_data {
#define btintel_wait_on_flag_timeout(hdev, nr, m, to) \
wait_on_bit_timeout(btintel_get_flag(hdev), (nr), m, to)
#if IS_ENABLED(CONFIG_BT_INTEL)
#if IS_ENABLED(CONFIG_BT_INTEL) || IS_ENABLED(CONFIG_BT_INTEL_PCIE)
int btintel_check_bdaddr(struct hci_dev *hdev);
int btintel_enter_mfg(struct hci_dev *hdev);
@ -228,6 +234,16 @@ void btintel_bootup(struct hci_dev *hdev, const void *ptr, unsigned int len);
void btintel_secure_send_result(struct hci_dev *hdev,
const void *ptr, unsigned int len);
int btintel_set_quality_report(struct hci_dev *hdev, bool enable);
int btintel_version_info_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version);
int btintel_parse_version_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version,
struct sk_buff *skb);
void btintel_set_msft_opcode(struct hci_dev *hdev, u8 hw_variant);
int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
struct intel_version_tlv *ver);
int btintel_shutdown_combined(struct hci_dev *hdev);
void btintel_hw_error(struct hci_dev *hdev, u8 code);
#else
static inline int btintel_check_bdaddr(struct hci_dev *hdev)
@ -324,4 +340,37 @@ static inline int btintel_set_quality_report(struct hci_dev *hdev, bool enable)
{
return -ENODEV;
}
static inline int btintel_version_info_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version)
{
return -EOPNOTSUPP;
}
static inline int btintel_parse_version_tlv(struct hci_dev *hdev,
struct intel_version_tlv *version,
struct sk_buff *skb)
{
return -EOPNOTSUPP;
}
static inline void btintel_set_msft_opcode(struct hci_dev *hdev, u8 hw_variant)
{
}
static inline int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
struct intel_version_tlv *ver)
{
return -ENODEV;
}
static inline int btintel_shutdown_combined(struct hci_dev *hdev)
{
return -ENODEV;
}
static inline void btintel_hw_error(struct hci_dev *hdev, u8 code)
{
}
#endif

File diff suppressed because it is too large.


@ -0,0 +1,430 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
*
* Bluetooth support for Intel PCIe devices
*
* Copyright (C) 2024 Intel Corporation
*/
/* Control and Status Register(BTINTEL_PCIE_CSR) */
#define BTINTEL_PCIE_CSR_BASE (0x000)
#define BTINTEL_PCIE_CSR_FUNC_CTRL_REG (BTINTEL_PCIE_CSR_BASE + 0x024)
#define BTINTEL_PCIE_CSR_HW_REV_REG (BTINTEL_PCIE_CSR_BASE + 0x028)
#define BTINTEL_PCIE_CSR_RF_ID_REG (BTINTEL_PCIE_CSR_BASE + 0x09C)
#define BTINTEL_PCIE_CSR_BOOT_STAGE_REG (BTINTEL_PCIE_CSR_BASE + 0x108)
#define BTINTEL_PCIE_CSR_CI_ADDR_LSB_REG (BTINTEL_PCIE_CSR_BASE + 0x118)
#define BTINTEL_PCIE_CSR_CI_ADDR_MSB_REG (BTINTEL_PCIE_CSR_BASE + 0x11C)
#define BTINTEL_PCIE_CSR_IMG_RESPONSE_REG (BTINTEL_PCIE_CSR_BASE + 0x12C)
#define BTINTEL_PCIE_CSR_HBUS_TARG_WRPTR (BTINTEL_PCIE_CSR_BASE + 0x460)
/* BTINTEL_PCIE_CSR Function Control Register */
#define BTINTEL_PCIE_CSR_FUNC_CTRL_FUNC_ENA (BIT(0))
#define BTINTEL_PCIE_CSR_FUNC_CTRL_MAC_INIT (BIT(6))
#define BTINTEL_PCIE_CSR_FUNC_CTRL_FUNC_INIT (BIT(7))
#define BTINTEL_PCIE_CSR_FUNC_CTRL_MAC_ACCESS_STS (BIT(20))
#define BTINTEL_PCIE_CSR_FUNC_CTRL_SW_RESET (BIT(31))
/* Value for BTINTEL_PCIE_CSR_BOOT_STAGE register */
#define BTINTEL_PCIE_CSR_BOOT_STAGE_ROM (BIT(0))
#define BTINTEL_PCIE_CSR_BOOT_STAGE_IML (BIT(1))
#define BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW (BIT(2))
#define BTINTEL_PCIE_CSR_BOOT_STAGE_ROM_LOCKDOWN (BIT(10))
#define BTINTEL_PCIE_CSR_BOOT_STAGE_IML_LOCKDOWN (BIT(11))
#define BTINTEL_PCIE_CSR_BOOT_STAGE_MAC_ACCESS_ON (BIT(16))
#define BTINTEL_PCIE_CSR_BOOT_STAGE_ALIVE (BIT(23))
/* Registers for MSI-X */
#define BTINTEL_PCIE_CSR_MSIX_BASE (0x2000)
#define BTINTEL_PCIE_CSR_MSIX_FH_INT_CAUSES (BTINTEL_PCIE_CSR_MSIX_BASE + 0x0800)
#define BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK (BTINTEL_PCIE_CSR_MSIX_BASE + 0x0804)
#define BTINTEL_PCIE_CSR_MSIX_HW_INT_CAUSES (BTINTEL_PCIE_CSR_MSIX_BASE + 0x0808)
#define BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK (BTINTEL_PCIE_CSR_MSIX_BASE + 0x080C)
#define BTINTEL_PCIE_CSR_MSIX_AUTOMASK_ST (BTINTEL_PCIE_CSR_MSIX_BASE + 0x0810)
#define BTINTEL_PCIE_CSR_MSIX_AUTOMASK_EN (BTINTEL_PCIE_CSR_MSIX_BASE + 0x0814)
#define BTINTEL_PCIE_CSR_MSIX_IVAR_BASE (BTINTEL_PCIE_CSR_MSIX_BASE + 0x0880)
#define BTINTEL_PCIE_CSR_MSIX_IVAR(cause) (BTINTEL_PCIE_CSR_MSIX_IVAR_BASE + (cause))
/* Causes for the FH register interrupts */
enum msix_fh_int_causes {
BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0 = BIT(0), /* cause 0 */
BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1 = BIT(1), /* cause 1 */
};
/* Causes for the HW register interrupts */
enum msix_hw_int_causes {
BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0 = BIT(0), /* cause 32 */
};
#define BTINTEL_PCIE_MSIX_NON_AUTO_CLEAR_CAUSE BIT(7)
/* Minimum and Maximum number of MSI-X Vector
* Intel Bluetooth PCIe supports only 1 vector
*/
#define BTINTEL_PCIE_MSIX_VEC_MAX 1
#define BTINTEL_PCIE_MSIX_VEC_MIN 1
/* Default poll time for MAC access during init */
#define BTINTEL_DEFAULT_MAC_ACCESS_TIMEOUT_US 200000
/* Default interrupt timeout in msec */
#define BTINTEL_DEFAULT_INTR_TIMEOUT 3000
/* The number of descriptors in TX/RX queues */
#define BTINTEL_DESCS_COUNT 16
/* Number of Queue for TX and RX
* It indicates the index of the IA(Index Array)
*/
enum {
BTINTEL_PCIE_TXQ_NUM = 0,
BTINTEL_PCIE_RXQ_NUM = 1,
BTINTEL_PCIE_NUM_QUEUES = 2,
};
/* The size of DMA buffer for TX and RX in bytes */
#define BTINTEL_PCIE_BUFFER_SIZE 4096
/* DMA allocation alignment */
#define BTINTEL_PCIE_DMA_POOL_ALIGNMENT 256
#define BTINTEL_PCIE_TX_WAIT_TIMEOUT_MS 500
/* Doorbell vector for TFD */
#define BTINTEL_PCIE_TX_DB_VEC 0
/* Number of pending RX requests for downlink */
#define BTINTEL_PCIE_RX_MAX_QUEUE 6
/* Doorbell vector for FRBD */
#define BTINTEL_PCIE_RX_DB_VEC 513
/* RBD buffer size mapping */
#define BTINTEL_PCIE_RBD_SIZE_4K 0x04
/*
* Struct for Context Information (v2)
*
* All members are write-only for host and read-only for device.
*
* @version: Version of context information
* @size: Size of context information
* @config: Config with which host wants peripheral to execute
* Subset of capability register published by device
* @addr_tr_hia: Address of TR Head Index Array
* @addr_tr_tia: Address of TR Tail Index Array
* @addr_cr_hia: Address of CR Head Index Array
* @addr_cr_tia: Address of CR Tail Index Array
* @num_tr_ia: Number of entries in TR Index Arrays
* @num_cr_ia: Number of entries in CR Index Arrays
* @rbd_siz: RBD Size { 0x4=4K }
* @addr_tfdq: Address of TFD Queue(tx)
* @addr_urbdq0: Address of URBD Queue(tx)
* @num_tfdq: Number of TFD in TFD Queue(tx)
* @num_urbdq0: Number of URBD in URBD Queue(tx)
* @tfdq_db_vec: Queue number of TFD
* @urbdq0_db_vec: Queue number of URBD
* @addr_frbdq: Address of FRBD Queue(rx)
* @addr_urbdq1: Address of URBD Queue(rx)
* @num_frbdq: Number of FRBD in FRBD Queue(rx)
* @frbdq_db_vec: Queue number of FRBD
* @num_urbdq1: Number of URBD in URBD Queue(rx)
* @urbdq_db_vec: Queue number of URBDQ1
* @tr_msi_vec: Transfer Ring MSI-X Vector
* @cr_msi_vec: Completion Ring MSI-X Vector
* @dbgc_addr: DBGC first fragment address
* @dbgc_size: DBGC buffer size
* @early_enable: Early debug enable
* @dbg_output_mode: Debug output mode
* Bit[4] DBGC O/P { 0=SRAM, 1=DRAM(not relevant for NPK) }
* Bit[5] DBGC I/P { 0=BDBG, 1=DBGI }
* Bits[6:7] DBGI O/P (relevant if bit[5] = 1)
* { 0=BT DBGC, 1=WiFi DBGC, 2=NPK }
* @dbg_preset: Debug preset
* @ext_addr: Address of context information extension
* @ext_size: Size of context information part
*
* Total 38 DWords
*/
struct ctx_info {
u16 version;
u16 size;
u32 config;
u32 reserved_dw02;
u32 reserved_dw03;
u64 addr_tr_hia;
u64 addr_tr_tia;
u64 addr_cr_hia;
u64 addr_cr_tia;
u16 num_tr_ia;
u16 num_cr_ia;
u32 rbd_size:4,
reserved_dw13:28;
u64 addr_tfdq;
u64 addr_urbdq0;
u16 num_tfdq;
u16 num_urbdq0;
u16 tfdq_db_vec;
u16 urbdq0_db_vec;
u64 addr_frbdq;
u64 addr_urbdq1;
u16 num_frbdq;
u16 frbdq_db_vec;
u16 num_urbdq1;
u16 urbdq_db_vec;
u16 tr_msi_vec;
u16 cr_msi_vec;
u32 reserved_dw27;
u64 dbgc_addr;
u32 dbgc_size;
u32 early_enable:1,
reserved_dw31:3,
dbg_output_mode:4,
dbg_preset:8,
reserved2_dw31:16;
u64 ext_addr;
u32 ext_size;
u32 test_param;
u32 reserved_dw36;
u32 reserved_dw37;
} __packed;
/* Transfer Descriptor for TX
* @type: Not in use. Set to 0x0
* @size: Size of data in the buffer
* @addr: DMA Address of buffer
*/
struct tfd {
u8 type;
u16 size;
u8 reserved;
u64 addr;
u32 reserved1;
} __packed;
/* URB Descriptor for TX
* @tfd_index: Index of TFD in TFDQ + 1
* @num_txq: Queue index of TFD Queue
* @cmpl_count: Completion count. Always 0x01
* @immediate_cmpl: Immediate completion flag: Always 0x01
*/
struct urbd0 {
u32 tfd_index:16,
num_txq:8,
cmpl_count:4,
reserved:3,
immediate_cmpl:1;
} __packed;
/* FRB Descriptor for RX
* @tag: RX buffer tag (index of RX buffer queue)
* @addr: Address of buffer
*/
struct frbd {
u32 tag:16,
reserved:16;
u32 reserved2;
u64 addr;
} __packed;
/* URB Descriptor for RX
* @frbd_tag: Tag from FRBD
* @status: Status
*/
struct urbd1 {
u32 frbd_tag:16,
status:1,
reserved:14,
fixed:1;
} __packed;
/* RFH header in RX packet
* @packet_len: Length of the data in the buffer
* @rxq: RX Queue number
* @cmd_id: Command ID. Not in Use
*/
struct rfh_hdr {
u64 packet_len:16,
rxq:6,
reserved:10,
cmd_id:16,
reserved1:16;
} __packed;
/* Internal data buffer
* @data: pointer to the data buffer
* @p_addr: physical address of data buffer
*/
struct data_buf {
u8 *data;
dma_addr_t data_p_addr;
};
/* Index Array */
struct ia {
dma_addr_t tr_hia_p_addr;
u16 *tr_hia;
dma_addr_t tr_tia_p_addr;
u16 *tr_tia;
dma_addr_t cr_hia_p_addr;
u16 *cr_hia;
dma_addr_t cr_tia_p_addr;
u16 *cr_tia;
};
/* Structure for TX Queue
* @count: Number of descriptors
* @tfds: Array of TFD
* @urbd0s: Array of URBD0
* @buf: Array of data_buf structure
*/
struct txq {
u16 count;
dma_addr_t tfds_p_addr;
struct tfd *tfds;
dma_addr_t urbd0s_p_addr;
struct urbd0 *urbd0s;
dma_addr_t buf_p_addr;
void *buf_v_addr;
struct data_buf *bufs;
};
/* Structure for RX Queue
* @count: Number of descriptors
* @frbds: Array of FRBD
* @urbd1s: Array of URBD1
* @buf: Array of data_buf structure
*/
struct rxq {
u16 count;
dma_addr_t frbds_p_addr;
struct frbd *frbds;
dma_addr_t urbd1s_p_addr;
struct urbd1 *urbd1s;
dma_addr_t buf_p_addr;
void *buf_v_addr;
struct data_buf *bufs;
};
/* struct btintel_pcie_data
* @pdev: pci device
* @hdev: hdev device
* @flags: driver state
* @irq_lock: spinlock for MSI-X
* @hci_rx_lock: spinlock for HCI RX flow
* @base_addr: pci base address (from BAR)
* @msix_entries: array of MSI-X entries
* @msix_enabled: true if MSI-X is enabled;
* @alloc_vecs: number of interrupt vectors allocated
* @def_irq: default irq for all causes
* @fh_init_mask: initial unmasked rxq causes
* @hw_init_mask: initial unmasked hw causes
* @boot_stage_cache: cached value of boot stage register
* @img_resp_cache: cached value of image response register
* @cnvi: CNVi register value
* @cnvr: CNVr register value
* @gp0_received: condition for gp0 interrupt
* @gp0_wait_q: wait_q for gp0 interrupt
* @tx_wait_done: condition for tx interrupt
* @tx_wait_q: wait_q for tx interrupt
* @workqueue: workqueue for RX work
* @rx_skb_q: SKB queue for RX packet
* @rx_work: RX work struct to process the RX packet in @rx_skb_q
* @dma_pool: DMA pool for descriptors, index array and ci
* @dma_p_addr: DMA address for pool
* @dma_v_addr: address of pool
* @ci_p_addr: DMA address for CI struct
* @ci: CI struct
* @ia: Index Array struct
* @txq: TX Queue struct
* @rxq: RX Queue struct
*/
struct btintel_pcie_data {
struct pci_dev *pdev;
struct hci_dev *hdev;
unsigned long flags;
/* lock used in MSI-X interrupt */
spinlock_t irq_lock;
/* lock to serialize rx events */
spinlock_t hci_rx_lock;
void __iomem *base_addr;
struct msix_entry msix_entries[BTINTEL_PCIE_MSIX_VEC_MAX];
bool msix_enabled;
u32 alloc_vecs;
u32 def_irq;
u32 fh_init_mask;
u32 hw_init_mask;
u32 boot_stage_cache;
u32 img_resp_cache;
u32 cnvi;
u32 cnvr;
bool gp0_received;
wait_queue_head_t gp0_wait_q;
bool tx_wait_done;
wait_queue_head_t tx_wait_q;
struct workqueue_struct *workqueue;
struct sk_buff_head rx_skb_q;
struct work_struct rx_work;
struct dma_pool *dma_pool;
dma_addr_t dma_p_addr;
void *dma_v_addr;
dma_addr_t ci_p_addr;
struct ctx_info *ci;
struct ia ia;
struct txq txq;
struct rxq rxq;
};
static inline u32 btintel_pcie_rd_reg32(struct btintel_pcie_data *data,
u32 offset)
{
return ioread32(data->base_addr + offset);
}
static inline void btintel_pcie_wr_reg8(struct btintel_pcie_data *data,
u32 offset, u8 val)
{
iowrite8(val, data->base_addr + offset);
}
static inline void btintel_pcie_wr_reg32(struct btintel_pcie_data *data,
u32 offset, u32 val)
{
iowrite32(val, data->base_addr + offset);
}
static inline void btintel_pcie_set_reg_bits(struct btintel_pcie_data *data,
u32 offset, u32 bits)
{
u32 r;
r = ioread32(data->base_addr + offset);
r |= bits;
iowrite32(r, data->base_addr + offset);
}
static inline void btintel_pcie_clr_reg_bits(struct btintel_pcie_data *data,
u32 offset, u32 bits)
{
u32 r;
r = ioread32(data->base_addr + offset);
r &= ~bits;
iowrite32(r, data->base_addr + offset);
}
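For illustration only: the body of the new PCIe driver (presumably btintel_pcie.c, whose diff was suppressed above as too large) is not shown here, so the following is a hypothetical sketch of how the CSR accessors and register bits defined in this header could be combined into a read-modify-write-and-poll sequence. The function name and flow are assumptions, not code from this patch.

#include <linux/delay.h>	/* udelay() */
#include <linux/errno.h>	/* ETIMEDOUT */

/* Hypothetical example (not from this patch): request function init and
 * poll for MAC access using only names defined in this header.
 */
static int btintel_pcie_example_enable_func(struct btintel_pcie_data *data)
{
	u32 reg, waited;

	/* Read-modify-write of the function control register */
	btintel_pcie_set_reg_bits(data, BTINTEL_PCIE_CSR_FUNC_CTRL_REG,
				  BTINTEL_PCIE_CSR_FUNC_CTRL_FUNC_INIT);

	/* Poll for MAC access, bounded by the default init timeout */
	for (waited = 0; waited < BTINTEL_DEFAULT_MAC_ACCESS_TIMEOUT_US;
	     waited += 100) {
		reg = btintel_pcie_rd_reg32(data,
					    BTINTEL_PCIE_CSR_FUNC_CTRL_REG);
		if (reg & BTINTEL_PCIE_CSR_FUNC_CTRL_MAC_ACCESS_STS)
			return 0;
		udelay(100);
	}

	return -ETIMEDOUT;
}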


@ -121,13 +121,6 @@ int btmrvl_process_event(struct btmrvl_private *priv, struct sk_buff *skb)
((event->data[2] == MODULE_BROUGHT_UP) ||
(event->data[2] == MODULE_ALREADY_UP)) ?
"Bring-up succeed" : "Bring-up failed");
if (event->length > 3 && event->data[3])
priv->btmrvl_dev.dev_type = HCI_AMP;
else
priv->btmrvl_dev.dev_type = HCI_PRIMARY;
BT_DBG("dev_type: %d", priv->btmrvl_dev.dev_type);
} else if (priv->btmrvl_dev.sendcmdflag &&
event->data[1] == MODULE_SHUTDOWN_REQ) {
BT_DBG("EVENT:%s", (event->data[2]) ?
@ -686,8 +679,6 @@ int btmrvl_register_hdev(struct btmrvl_private *priv)
hdev->wakeup = btmrvl_wakeup;
SET_HCIDEV_DEV(hdev, &card->func->dev);
hdev->dev_type = priv->btmrvl_dev.dev_type;
ret = hci_register_dev(hdev);
if (ret < 0) {
BT_ERR("Can not register HCI device");


@ -13,8 +13,6 @@
#include "btqca.h"
#define VERSION "0.1"
int qca_read_soc_version(struct hci_dev *hdev, struct qca_btsoc_version *ver,
enum qca_btsoc_type soc_type)
{
@ -55,11 +53,6 @@ int qca_read_soc_version(struct hci_dev *hdev, struct qca_btsoc_version *ver,
}
edl = (struct edl_event_hdr *)(skb->data);
if (!edl) {
bt_dev_err(hdev, "QCA TLV with no header");
err = -EILSEQ;
goto out;
}
if (edl->cresp != EDL_CMD_REQ_RES_EVT ||
edl->rtype != rtype) {
@ -121,11 +114,6 @@ static int qca_read_fw_build_info(struct hci_dev *hdev)
}
edl = (struct edl_event_hdr *)(skb->data);
if (!edl) {
bt_dev_err(hdev, "QCA read fw build info with no header");
err = -EILSEQ;
goto out;
}
if (edl->cresp != EDL_CMD_REQ_RES_EVT ||
edl->rtype != EDL_GET_BUILD_INFO_CMD) {
@ -148,8 +136,10 @@ static int qca_read_fw_build_info(struct hci_dev *hdev)
}
build_label = kstrndup(&edl->data[1], build_lbl_len, GFP_KERNEL);
if (!build_label)
if (!build_label) {
err = -ENOMEM;
goto out;
}
hci_set_fw_info(hdev, "%s", build_label);
@ -183,11 +173,6 @@ static int qca_send_patch_config_cmd(struct hci_dev *hdev)
}
edl = (struct edl_event_hdr *)(skb->data);
if (!edl) {
bt_dev_err(hdev, "QCA Patch config with no header");
err = -EILSEQ;
goto out;
}
if (edl->cresp != EDL_PATCH_CONFIG_RES_EVT || edl->rtype != EDL_PATCH_CONFIG_CMD) {
bt_dev_err(hdev, "QCA Wrong packet received %d %d", edl->cresp,
@ -502,11 +487,6 @@ static int qca_tlv_send_segment(struct hci_dev *hdev, int seg_size,
}
edl = (struct edl_event_hdr *)(skb->data);
if (!edl) {
bt_dev_err(hdev, "TLV with no header");
err = -EILSEQ;
goto out;
}
if (edl->cresp != EDL_CMD_REQ_RES_EVT || edl->rtype != rtype) {
bt_dev_err(hdev, "QCA TLV with error stat 0x%x rtype 0x%x",
@ -737,6 +717,19 @@ static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size,
snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid);
}
static inline void qca_get_nvm_name_generic(struct qca_fw_config *cfg,
const char *stem, u8 rom_ver, u16 bid)
{
if (bid == 0x0)
snprintf(cfg->fwname, sizeof(cfg->fwname), "qca/%snv%02x.bin", stem, rom_ver);
else if (bid & 0xff00)
snprintf(cfg->fwname, sizeof(cfg->fwname),
"qca/%snv%02x.b%x", stem, rom_ver, bid);
else
snprintf(cfg->fwname, sizeof(cfg->fwname),
"qca/%snv%02x.b%02x", stem, rom_ver, bid);
}
int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
enum qca_btsoc_type soc_type, struct qca_btsoc_version ver,
const char *firmware_name)
@ -817,7 +810,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
/* Give the controller some time to get ready to receive the NVM */
msleep(10);
if (soc_type == QCA_QCA2066)
if (soc_type == QCA_QCA2066 || soc_type == QCA_WCN7850)
qca_read_fw_board_id(hdev, &boardid);
/* Download NVM configuration */
@ -859,8 +852,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
"qca/hpnv%02x.bin", rom_ver);
break;
case QCA_WCN7850:
snprintf(config.fwname, sizeof(config.fwname),
"qca/hmtnv%02x.bin", rom_ver);
qca_get_nvm_name_generic(&config, "hmt", rom_ver, boardid);
break;
default:
@ -961,6 +953,5 @@ EXPORT_SYMBOL_GPL(qca_set_bdaddr);
MODULE_AUTHOR("Ben Young Tae Kim <ytkim@qca.qualcomm.com>");
MODULE_DESCRIPTION("Bluetooth support for Qualcomm Atheros family ver " VERSION);
MODULE_VERSION(VERSION);
MODULE_DESCRIPTION("Bluetooth support for Qualcomm Atheros family");
MODULE_LICENSE("GPL");
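For a concrete feel of the qca_get_nvm_name_generic() naming added above, assume a purely hypothetical ROM version of 0x22 with the "hmt" stem that the WCN7850 case passes in: a board id of 0 yields qca/hmtnv22.bin, a board id of 0x03 yields qca/hmtnv22.b03, and a board id of 0x0103 (high byte set) yields qca/hmtnv22.b103.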


@ -5,33 +5,33 @@
* Copyright (c) 2015 The Linux Foundation. All rights reserved.
*/
#define EDL_PATCH_CMD_OPCODE (0xFC00)
#define EDL_NVM_ACCESS_OPCODE (0xFC0B)
#define EDL_WRITE_BD_ADDR_OPCODE (0xFC14)
#define EDL_PATCH_CMD_LEN (1)
#define EDL_PATCH_VER_REQ_CMD (0x19)
#define EDL_PATCH_TLV_REQ_CMD (0x1E)
#define EDL_GET_BUILD_INFO_CMD (0x20)
#define EDL_GET_BID_REQ_CMD (0x23)
#define EDL_NVM_ACCESS_SET_REQ_CMD (0x01)
#define EDL_PATCH_CONFIG_CMD (0x28)
#define MAX_SIZE_PER_TLV_SEGMENT (243)
#define QCA_PRE_SHUTDOWN_CMD (0xFC08)
#define QCA_DISABLE_LOGGING (0xFC17)
#define EDL_PATCH_CMD_OPCODE 0xFC00
#define EDL_NVM_ACCESS_OPCODE 0xFC0B
#define EDL_WRITE_BD_ADDR_OPCODE 0xFC14
#define EDL_PATCH_CMD_LEN 1
#define EDL_PATCH_VER_REQ_CMD 0x19
#define EDL_PATCH_TLV_REQ_CMD 0x1E
#define EDL_GET_BUILD_INFO_CMD 0x20
#define EDL_GET_BID_REQ_CMD 0x23
#define EDL_NVM_ACCESS_SET_REQ_CMD 0x01
#define EDL_PATCH_CONFIG_CMD 0x28
#define MAX_SIZE_PER_TLV_SEGMENT 243
#define QCA_PRE_SHUTDOWN_CMD 0xFC08
#define QCA_DISABLE_LOGGING 0xFC17
#define EDL_CMD_REQ_RES_EVT (0x00)
#define EDL_PATCH_VER_RES_EVT (0x19)
#define EDL_APP_VER_RES_EVT (0x02)
#define EDL_TVL_DNLD_RES_EVT (0x04)
#define EDL_CMD_EXE_STATUS_EVT (0x00)
#define EDL_SET_BAUDRATE_RSP_EVT (0x92)
#define EDL_NVM_ACCESS_CODE_EVT (0x0B)
#define EDL_PATCH_CONFIG_RES_EVT (0x00)
#define QCA_DISABLE_LOGGING_SUB_OP (0x14)
#define EDL_CMD_REQ_RES_EVT 0x00
#define EDL_PATCH_VER_RES_EVT 0x19
#define EDL_APP_VER_RES_EVT 0x02
#define EDL_TVL_DNLD_RES_EVT 0x04
#define EDL_CMD_EXE_STATUS_EVT 0x00
#define EDL_SET_BAUDRATE_RSP_EVT 0x92
#define EDL_NVM_ACCESS_CODE_EVT 0x0B
#define EDL_PATCH_CONFIG_RES_EVT 0x00
#define QCA_DISABLE_LOGGING_SUB_OP 0x14
#define EDL_TAG_ID_BD_ADDR 2
#define EDL_TAG_ID_HCI (17)
#define EDL_TAG_ID_DEEP_SLEEP (27)
#define EDL_TAG_ID_HCI 17
#define EDL_TAG_ID_DEEP_SLEEP 27
#define QCA_WCN3990_POWERON_PULSE 0xFC
#define QCA_WCN3990_POWEROFF_PULSE 0xC0
@ -39,7 +39,7 @@
#define QCA_HCI_CC_OPCODE 0xFC00
#define QCA_HCI_CC_SUCCESS 0x00
#define QCA_WCN3991_SOC_ID (0x40014320)
#define QCA_WCN3991_SOC_ID 0x40014320
/* QCA chipset version can be decided by patch and SoC
* version, combination with upper 2 bytes from SoC
@ -48,11 +48,11 @@
#define get_soc_ver(soc_id, rom_ver) \
((le32_to_cpu(soc_id) << 16) | (le16_to_cpu(rom_ver)))
#define QCA_HSP_GF_SOC_ID 0x1200
#define QCA_HSP_GF_SOC_MASK 0x0000ff00
#define QCA_HSP_GF_SOC_ID 0x1200
#define QCA_HSP_GF_SOC_MASK 0x0000ff00
enum qca_baudrate {
QCA_BAUDRATE_115200 = 0,
QCA_BAUDRATE_115200 = 0,
QCA_BAUDRATE_57600,
QCA_BAUDRATE_38400,
QCA_BAUDRATE_19200,
@ -71,7 +71,7 @@ enum qca_baudrate {
QCA_BAUDRATE_1600000,
QCA_BAUDRATE_3200000,
QCA_BAUDRATE_3500000,
QCA_BAUDRATE_AUTO = 0xFE,
QCA_BAUDRATE_AUTO = 0xFE,
QCA_BAUDRATE_RESERVED
};


@ -197,7 +197,7 @@ destroy_acl_channel:
return ret;
}
static int btqcomsmd_remove(struct platform_device *pdev)
static void btqcomsmd_remove(struct platform_device *pdev)
{
struct btqcomsmd *btq = platform_get_drvdata(pdev);
@ -206,8 +206,6 @@ static int btqcomsmd_remove(struct platform_device *pdev)
rpmsg_destroy_ept(btq->cmd_channel);
rpmsg_destroy_ept(btq->acl_channel);
return 0;
}
static const struct of_device_id btqcomsmd_of_match[] = {
@ -218,7 +216,7 @@ MODULE_DEVICE_TABLE(of, btqcomsmd_of_match);
static struct platform_driver btqcomsmd_driver = {
.probe = btqcomsmd_probe,
.remove = btqcomsmd_remove,
.remove_new = btqcomsmd_remove,
.driver = {
.name = "btqcomsmd",
.of_match_table = btqcomsmd_of_match,


@ -134,7 +134,6 @@ static int rsi_hci_attach(void *priv, struct rsi_proto_ops *ops)
hdev->bus = HCI_USB;
hci_set_drvdata(hdev, h_adapter);
hdev->dev_type = HCI_PRIMARY;
hdev->open = rsi_hci_open;
hdev->close = rsi_hci_close;
hdev->flush = rsi_hci_flush;


@ -1339,6 +1339,13 @@ int btrtl_setup_realtek(struct hci_dev *hdev)
btrtl_set_quirks(hdev, btrtl_dev);
hci_set_hw_info(hdev,
"RTL lmp_subver=%u hci_rev=%u hci_ver=%u hci_bus=%u",
btrtl_dev->ic_info->lmp_subver,
btrtl_dev->ic_info->hci_rev,
btrtl_dev->ic_info->hci_ver,
btrtl_dev->ic_info->hci_bus);
btrtl_free(btrtl_dev);
return ret;
}


@ -32,9 +32,6 @@ static const struct sdio_device_id btsdio_table[] = {
/* Generic Bluetooth Type-B SDIO device */
{ SDIO_DEVICE_CLASS(SDIO_CLASS_BT_B) },
/* Generic Bluetooth AMP controller */
{ SDIO_DEVICE_CLASS(SDIO_CLASS_BT_AMP) },
{ } /* Terminating entry */
};
@ -319,11 +316,6 @@ static int btsdio_probe(struct sdio_func *func,
hdev->bus = HCI_SDIO;
hci_set_drvdata(hdev, data);
if (id->class == SDIO_CLASS_BT_AMP)
hdev->dev_type = HCI_AMP;
else
hdev->dev_type = HCI_PRIMARY;
data->hdev = hdev;
SET_HCIDEV_DEV(hdev, &func->dev);


@ -477,6 +477,7 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x8087, 0x0033), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x0035), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x0036), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x0037), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x0038), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR },
{ USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL_COMBINED |
@ -588,6 +589,9 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x0489, 0xe0c8), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0489, 0xe0cd), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0489, 0xe0e0), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
@ -597,6 +601,9 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x04ca, 0x3802), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0e8d, 0x0608), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3563), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
@ -612,10 +619,12 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x13d3, 0x3583), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0489, 0xe0cd), .driver_info = BTUSB_MEDIATEK |
{ USB_DEVICE(0x13d3, 0x3606), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0e8d, 0x0608), .driver_info = BTUSB_MEDIATEK |
/* MediaTek MT7922 Bluetooth devices */
{ USB_DEVICE(0x13d3, 0x3585), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
@ -626,12 +635,6 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0489, 0xe0f5), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0489, 0xe0e2), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
@ -656,14 +659,38 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x04ca, 0x3804), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x04ca, 0x38e4), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3605), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3607), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3614), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3615), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x35f5, 0x7922), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
/* Additional MediaTek MT7925 Bluetooth devices */
{ USB_DEVICE(0x0489, 0xe113), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3602), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3603), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
/* Additional Realtek 8723AE Bluetooth devices */
{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
@ -2951,7 +2978,7 @@ static int btusb_mtk_uhw_reg_read(struct btusb_data *data, u32 reg, u32 *val)
err = usb_control_msg(data->udev, pipe, 0x01,
0xDE,
reg >> 16, reg & 0xffff,
buf, 4, USB_CTRL_SET_TIMEOUT);
buf, 4, USB_CTRL_GET_TIMEOUT);
if (err < 0) {
bt_dev_err(hdev, "Failed to read uhw reg(%d)", err);
goto err_free_buf;
@ -2979,7 +3006,7 @@ static int btusb_mtk_reg_read(struct btusb_data *data, u32 reg, u32 *val)
err = usb_control_msg(data->udev, pipe, 0x63,
USB_TYPE_VENDOR | USB_DIR_IN,
reg >> 16, reg & 0xffff,
buf, size, USB_CTRL_SET_TIMEOUT);
buf, size, USB_CTRL_GET_TIMEOUT);
if (err < 0)
goto err_free_buf;
@ -3118,6 +3145,7 @@ static int btusb_mtk_setup(struct hci_dev *hdev)
bt_dev_err(hdev, "Failed to get fw flavor (%d)", err);
return err;
}
fw_flavor = (fw_flavor & 0x00000080) >> 7;
}
mediatek = hci_get_priv(hdev);
@ -3693,7 +3721,7 @@ static int btusb_qca_send_vendor_req(struct usb_device *udev, u8 request,
*/
pipe = usb_rcvctrlpipe(udev, 0);
err = usb_control_msg(udev, pipe, request, USB_TYPE_VENDOR | USB_DIR_IN,
0, 0, buf, size, USB_CTRL_SET_TIMEOUT);
0, 0, buf, size, USB_CTRL_GET_TIMEOUT);
if (err < 0) {
dev_err(&udev->dev, "Failed to access otp area (%d)", err);
goto done;
@ -4331,11 +4359,6 @@ static int btusb_probe(struct usb_interface *intf,
hdev->bus = HCI_USB;
hci_set_drvdata(hdev, data);
if (id->driver_info & BTUSB_AMP)
hdev->dev_type = HCI_AMP;
else
hdev->dev_type = HCI_PRIMARY;
data->hdev = hdev;
SET_HCIDEV_DEV(hdev, &intf->dev);


@ -1293,7 +1293,7 @@ static int bcm_probe(struct platform_device *pdev)
return 0;
}
static int bcm_remove(struct platform_device *pdev)
static void bcm_remove(struct platform_device *pdev)
{
struct bcm_device *dev = platform_get_drvdata(pdev);
@ -1302,8 +1302,6 @@ static int bcm_remove(struct platform_device *pdev)
mutex_unlock(&bcm_device_lock);
dev_info(&pdev->dev, "%s device unregistered.\n", dev->name);
return 0;
}
static const struct hci_uart_proto bcm_proto = {
@ -1487,7 +1485,7 @@ static const struct acpi_device_id bcm_acpi_match[] = {
{ "BCM2EA1" },
{ "BCM2EA2", (long)&bcm43430_device_data },
{ "BCM2EA3", (long)&bcm43430_device_data },
{ "BCM2EA4" },
{ "BCM2EA4", (long)&bcm43430_device_data }, /* bcm43455 */
{ "BCM2EA5" },
{ "BCM2EA6" },
{ "BCM2EA7" },
@ -1509,7 +1507,7 @@ static const struct dev_pm_ops bcm_pm_ops = {
static struct platform_driver bcm_driver = {
.probe = bcm_probe,
.remove = bcm_remove,
.remove_new = bcm_remove,
.driver = {
.name = "hci_bcm",
.acpi_match_table = ACPI_PTR(bcm_acpi_match),


@ -2361,7 +2361,6 @@ static int bcm4377_probe(struct pci_dev *pdev, const struct pci_device_id *id)
bcm4377->hdev = hdev;
hdev->bus = HCI_PCI;
hdev->dev_type = HCI_PRIMARY;
hdev->open = bcm4377_hci_open;
hdev->close = bcm4377_hci_close;
hdev->send = bcm4377_hci_send_frame;


@ -537,7 +537,7 @@ static int intel_setup(struct hci_uart *hu)
int speed_change = 0;
int err;
bt_dev_dbg(hdev, "start intel_setup");
bt_dev_dbg(hdev, "");
hu->hdev->set_diag = btintel_set_diag;
hu->hdev->set_bdaddr = btintel_set_bdaddr;
@ -591,12 +591,12 @@ static int intel_setup(struct hci_uart *hu)
return -EINVAL;
}
/* Check for supported iBT hardware variants of this firmware
* loading method.
*
* This check has been put in place to ensure correct forward
* compatibility options when newer hardware variants come along.
*/
/* Check for supported iBT hardware variants of this firmware
* loading method.
*
* This check has been put in place to ensure correct forward
* compatibility options when newer hardware variants come along.
*/
switch (ver.hw_variant) {
case 0x0b: /* LnP */
case 0x0c: /* WsP */
@ -777,7 +777,7 @@ static int intel_setup(struct hci_uart *hu)
rettime = ktime_get();
delta = ktime_sub(rettime, calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
duration = (unsigned long long)ktime_to_ns(delta) >> 10;
bt_dev_info(hdev, "Firmware loaded in %llu usecs", duration);
@ -822,7 +822,7 @@ done:
rettime = ktime_get();
delta = ktime_sub(rettime, calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
duration = (unsigned long long)ktime_to_ns(delta) >> 10;
bt_dev_info(hdev, "Device booted in %llu usecs", duration);
@ -977,6 +977,7 @@ static int intel_recv(struct hci_uart *hu, const void *data, int count)
ARRAY_SIZE(intel_recv_pkts));
if (IS_ERR(intel->rx_skb)) {
int err = PTR_ERR(intel->rx_skb);
bt_dev_err(hu->hdev, "Frame reassembly failed (%d)", err);
intel->rx_skb = NULL;
return err;
@ -1190,7 +1191,7 @@ no_irq:
return 0;
}
static int intel_remove(struct platform_device *pdev)
static void intel_remove(struct platform_device *pdev)
{
struct intel_device *idev = platform_get_drvdata(pdev);
@ -1201,13 +1202,11 @@ static int intel_remove(struct platform_device *pdev)
mutex_unlock(&intel_device_list_lock);
dev_info(&pdev->dev, "unregistered.\n");
return 0;
}
static struct platform_driver intel_driver = {
.probe = intel_probe,
.remove = intel_remove,
.remove_new = intel_remove,
.driver = {
.name = "hci_intel",
.acpi_match_table = ACPI_PTR(intel_acpi_match),


@ -667,11 +667,6 @@ static int hci_uart_register_dev(struct hci_uart *hu)
if (!test_bit(HCI_UART_RESET_ON_INIT, &hu->hdev_flags))
set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
if (test_bit(HCI_UART_CREATE_AMP, &hu->hdev_flags))
hdev->dev_type = HCI_AMP;
else
hdev->dev_type = HCI_PRIMARY;
/* Only call open() for the protocol after hdev is fully initialized as
* open() (or a timer/workqueue it starts) may attempt to reference it.
*/
@ -722,7 +717,6 @@ static int hci_uart_set_flags(struct hci_uart *hu, unsigned long flags)
{
unsigned long valid_flags = BIT(HCI_UART_RAW_DEVICE) |
BIT(HCI_UART_RESET_ON_INIT) |
BIT(HCI_UART_CREATE_AMP) |
BIT(HCI_UART_INIT_PENDING) |
BIT(HCI_UART_EXT_CONFIG) |
BIT(HCI_UART_VND_DETECT);


@ -366,11 +366,6 @@ int hci_uart_register_device_priv(struct hci_uart *hu,
if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags))
set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
if (test_bit(HCI_UART_CREATE_AMP, &hu->hdev_flags))
hdev->dev_type = HCI_AMP;
else
hdev->dev_type = HCI_PRIMARY;
if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
return 0;


@ -37,7 +37,6 @@
#define HCI_UART_RAW_DEVICE 0
#define HCI_UART_RESET_ON_INIT 1
#define HCI_UART_CREATE_AMP 2
#define HCI_UART_INIT_PENDING 3
#define HCI_UART_EXT_CONFIG 4
#define HCI_UART_VND_DETECT 5


@ -384,17 +384,10 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
{
struct hci_dev *hdev;
struct sk_buff *skb;
__u8 dev_type;
if (data->hdev)
return -EBADFD;
/* bits 0-1 are dev_type (Primary or AMP) */
dev_type = opcode & 0x03;
if (dev_type != HCI_PRIMARY && dev_type != HCI_AMP)
return -EINVAL;
/* bits 2-5 are reserved (must be zero) */
if (opcode & 0x3c)
return -EINVAL;
@ -412,7 +405,6 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
data->hdev = hdev;
hdev->bus = HCI_VIRTUAL;
hdev->dev_type = dev_type;
hci_set_drvdata(hdev, data);
hdev->open = vhci_open_dev;
@ -634,7 +626,7 @@ static void vhci_open_timeout(struct work_struct *work)
struct vhci_data *data = container_of(work, struct vhci_data,
open_timeout.work);
vhci_create_device(data, amp ? HCI_AMP : HCI_PRIMARY);
vhci_create_device(data, 0x00);
}
static int vhci_open(struct inode *inode, struct file *file)


@ -274,7 +274,6 @@ static int virtbt_probe(struct virtio_device *vdev)
switch (type) {
case VIRTIO_BT_CONFIG_TYPE_PRIMARY:
case VIRTIO_BT_CONFIG_TYPE_AMP:
break;
default:
return -EINVAL;
@ -303,7 +302,6 @@ static int virtbt_probe(struct virtio_device *vdev)
vbt->hdev = hdev;
hdev->bus = HCI_VIRTIO;
hdev->dev_type = type;
hci_set_drvdata(hdev, vbt);
hdev->open = virtbt_open;


@ -285,7 +285,7 @@ void bt_err_ratelimited(const char *fmt, ...);
bt_err_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
/* Connection and socket states */
enum {
enum bt_sock_state {
BT_CONNECTED = 1, /* Equal to TCP_ESTABLISHED to make net code happy */
BT_OPEN,
BT_BOUND,


@ -33,9 +33,6 @@
#define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4)
#define HCI_LINK_KEY_SIZE 16
#define HCI_AMP_LINK_KEY_SIZE (2 * HCI_LINK_KEY_SIZE)
#define HCI_MAX_AMP_ASSOC_SIZE 672
#define HCI_MAX_CPB_DATA_SIZE 252
@ -71,26 +68,6 @@
#define HCI_SMD 9
#define HCI_VIRTIO 10
/* HCI controller types */
#define HCI_PRIMARY 0x00
#define HCI_AMP 0x01
/* First BR/EDR Controller shall have ID = 0 */
#define AMP_ID_BREDR 0x00
/* AMP controller types */
#define AMP_TYPE_BREDR 0x00
#define AMP_TYPE_80211 0x01
/* AMP controller status */
#define AMP_STATUS_POWERED_DOWN 0x00
#define AMP_STATUS_BLUETOOTH_ONLY 0x01
#define AMP_STATUS_NO_CAPACITY 0x02
#define AMP_STATUS_LOW_CAPACITY 0x03
#define AMP_STATUS_MEDIUM_CAPACITY 0x04
#define AMP_STATUS_HIGH_CAPACITY 0x05
#define AMP_STATUS_FULL_CAPACITY 0x06
/* HCI device quirks */
enum {
/* When this quirk is set, the HCI Reset command is send when
@ -456,7 +433,6 @@ enum {
#define HCI_AUTO_OFF_TIMEOUT msecs_to_jiffies(2000) /* 2 seconds */
#define HCI_ACL_CONN_TIMEOUT msecs_to_jiffies(20000) /* 20 seconds */
#define HCI_LE_CONN_TIMEOUT msecs_to_jiffies(20000) /* 20 seconds */
#define HCI_LE_AUTOCONN_TIMEOUT msecs_to_jiffies(4000) /* 4 seconds */
/* HCI data types */
#define HCI_COMMAND_PKT 0x01
@ -528,7 +504,6 @@ enum {
#define ESCO_LINK 0x02
/* Low Energy links do not have defined link type. Use invented one */
#define LE_LINK 0x80
#define AMP_LINK 0x81
#define ISO_LINK 0x82
#define INVALID_LINK 0xff
@ -944,56 +919,6 @@ struct hci_cp_io_capability_neg_reply {
__u8 reason;
} __packed;
#define HCI_OP_CREATE_PHY_LINK 0x0435
struct hci_cp_create_phy_link {
__u8 phy_handle;
__u8 key_len;
__u8 key_type;
__u8 key[HCI_AMP_LINK_KEY_SIZE];
} __packed;
#define HCI_OP_ACCEPT_PHY_LINK 0x0436
struct hci_cp_accept_phy_link {
__u8 phy_handle;
__u8 key_len;
__u8 key_type;
__u8 key[HCI_AMP_LINK_KEY_SIZE];
} __packed;
#define HCI_OP_DISCONN_PHY_LINK 0x0437
struct hci_cp_disconn_phy_link {
__u8 phy_handle;
__u8 reason;
} __packed;
struct ext_flow_spec {
__u8 id;
__u8 stype;
__le16 msdu;
__le32 sdu_itime;
__le32 acc_lat;
__le32 flush_to;
} __packed;
#define HCI_OP_CREATE_LOGICAL_LINK 0x0438
#define HCI_OP_ACCEPT_LOGICAL_LINK 0x0439
struct hci_cp_create_accept_logical_link {
__u8 phy_handle;
struct ext_flow_spec tx_flow_spec;
struct ext_flow_spec rx_flow_spec;
} __packed;
#define HCI_OP_DISCONN_LOGICAL_LINK 0x043a
struct hci_cp_disconn_logical_link {
__le16 log_handle;
} __packed;
#define HCI_OP_LOGICAL_LINK_CANCEL 0x043b
struct hci_cp_logical_link_cancel {
__u8 phy_handle;
__u8 flow_spec_id;
} __packed;
#define HCI_OP_ENHANCED_SETUP_SYNC_CONN 0x043d
struct hci_coding_format {
__u8 id;
@ -1615,46 +1540,6 @@ struct hci_rp_read_enc_key_size {
__u8 key_size;
} __packed;
#define HCI_OP_READ_LOCAL_AMP_INFO 0x1409
struct hci_rp_read_local_amp_info {
__u8 status;
__u8 amp_status;
__le32 total_bw;
__le32 max_bw;
__le32 min_latency;
__le32 max_pdu;
__u8 amp_type;
__le16 pal_cap;
__le16 max_assoc_size;
__le32 max_flush_to;
__le32 be_flush_to;
} __packed;
#define HCI_OP_READ_LOCAL_AMP_ASSOC 0x140a
struct hci_cp_read_local_amp_assoc {
__u8 phy_handle;
__le16 len_so_far;
__le16 max_len;
} __packed;
struct hci_rp_read_local_amp_assoc {
__u8 status;
__u8 phy_handle;
__le16 rem_len;
__u8 frag[];
} __packed;
#define HCI_OP_WRITE_REMOTE_AMP_ASSOC 0x140b
struct hci_cp_write_remote_amp_assoc {
__u8 phy_handle;
__le16 len_so_far;
__le16 rem_len;
__u8 frag[];
} __packed;
struct hci_rp_write_remote_amp_assoc {
__u8 status;
__u8 phy_handle;
} __packed;
#define HCI_OP_GET_MWS_TRANSPORT_CONFIG 0x140c
#define HCI_OP_ENABLE_DUT_MODE 0x1803
@ -1666,6 +1551,15 @@ struct hci_cp_le_set_event_mask {
__u8 mask[8];
} __packed;
/* BLUETOOTH CORE SPECIFICATION Version 5.4 | Vol 4, Part E
* 7.8.2 LE Read Buffer Size command
* MAX_LE_MTU is 0xffff.
* 0 is also valid; it means that no dedicated LE buffer exists and the
* host should use the HCI_Read_Buffer_Size command instead, with the MTU
* shared between BR/EDR and LE.
*/
#define HCI_MIN_LE_MTU 0x001b
#define HCI_OP_LE_READ_BUFFER_SIZE 0x2002
struct hci_rp_le_read_buffer_size {
__u8 status;
@ -2026,7 +1920,7 @@ struct hci_cp_le_set_ext_adv_data {
__u8 operation;
__u8 frag_pref;
__u8 length;
__u8 data[];
__u8 data[] __counted_by(length);
} __packed;
#define HCI_OP_LE_SET_EXT_SCAN_RSP_DATA 0x2038
@ -2035,7 +1929,7 @@ struct hci_cp_le_set_ext_scan_rsp_data {
__u8 operation;
__u8 frag_pref;
__u8 length;
__u8 data[];
__u8 data[] __counted_by(length);
} __packed;
#define HCI_OP_LE_SET_EXT_ADV_ENABLE 0x2039
@ -2061,7 +1955,7 @@ struct hci_cp_le_set_per_adv_data {
__u8 handle;
__u8 operation;
__u8 length;
__u8 data[];
__u8 data[] __counted_by(length);
} __packed;
#define HCI_OP_LE_SET_PER_ADV_ENABLE 0x2040
@ -2144,7 +2038,7 @@ struct hci_cp_le_set_cig_params {
__le16 c_latency;
__le16 p_latency;
__u8 num_cis;
struct hci_cis_params cis[];
struct hci_cis_params cis[] __counted_by(num_cis);
} __packed;
struct hci_rp_le_set_cig_params {
@ -2162,7 +2056,7 @@ struct hci_cis {
struct hci_cp_le_create_cis {
__u8 num_cis;
struct hci_cis cis[];
struct hci_cis cis[] __counted_by(num_cis);
} __packed;
#define HCI_OP_LE_REMOVE_CIG 0x2065
@ -2216,7 +2110,7 @@ struct hci_cp_le_big_create_sync {
__u8 mse;
__le16 timeout;
__u8 num_bis;
__u8 bis[];
__u8 bis[] __counted_by(num_bis);
} __packed;
#define HCI_OP_LE_BIG_TERM_SYNC 0x206c
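Several hunks above annotate HCI flexible-array members (data[], cis[], bis[]) with __counted_by() so that the compiler and fortified helpers know which struct field carries the element count. A minimal, standalone sketch of the pattern follows; the struct and helper are hypothetical and not taken from this patch, and the macro is open-coded here (in the kernel it comes from the compiler attribute headers) for compilers that provide the counted_by attribute.

#include <stdlib.h>
#include <string.h>

#ifndef __has_attribute
#define __has_attribute(x) 0
#endif

#if __has_attribute(counted_by)
#define __counted_by(member) __attribute__((counted_by(member)))
#else
#define __counted_by(member)	/* no-op on older compilers */
#endif

/* Hypothetical example: a count field followed by a flexible array whose
 * valid length is the value of that count.
 */
struct cis_list {
	unsigned char num_cis;
	unsigned short cis[] __counted_by(num_cis);
};

static struct cis_list *cis_list_alloc(unsigned char n)
{
	/* struct_size()-style allocation: header plus n trailing elements */
	struct cis_list *l = malloc(sizeof(*l) + (size_t)n * sizeof(l->cis[0]));

	if (!l)
		return NULL;
	l->num_cis = n;		/* set the count before touching cis[] */
	memset(l->cis, 0, (size_t)n * sizeof(l->cis[0]));
	return l;
}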

View File

@ -126,7 +126,6 @@ enum suspended_state {
struct hci_conn_hash {
struct list_head list;
unsigned int acl_num;
unsigned int amp_num;
unsigned int sco_num;
unsigned int iso_num;
unsigned int le_num;
@ -247,6 +246,7 @@ struct adv_info {
bool periodic;
__u8 mesh;
__u8 instance;
__u8 handle;
__u32 flags;
__u16 timeout;
__u16 remaining_time;
@ -341,14 +341,6 @@ struct adv_monitor {
/* Default authenticated payload timeout 30s */
#define DEFAULT_AUTH_PAYLOAD_TIMEOUT 0x0bb8
struct amp_assoc {
__u16 len;
__u16 offset;
__u16 rem_len;
__u16 len_so_far;
__u8 data[HCI_MAX_AMP_ASSOC_SIZE];
};
#define HCI_MAX_PAGES 3
struct hci_dev {
@ -361,7 +353,6 @@ struct hci_dev {
unsigned long flags;
__u16 id;
__u8 bus;
__u8 dev_type;
bdaddr_t bdaddr;
bdaddr_t setup_addr;
bdaddr_t public_addr;
@ -467,21 +458,6 @@ struct hci_dev {
__u16 sniff_min_interval;
__u16 sniff_max_interval;
__u8 amp_status;
__u32 amp_total_bw;
__u32 amp_max_bw;
__u32 amp_min_latency;
__u32 amp_max_pdu;
__u8 amp_type;
__u16 amp_pal_cap;
__u16 amp_assoc_size;
__u32 amp_max_flush_to;
__u32 amp_be_flush_to;
struct amp_assoc loc_assoc;
__u8 flow_ctl_mode;
unsigned int auto_accept_delay;
unsigned long quirks;
@ -501,11 +477,6 @@ struct hci_dev {
unsigned int le_pkts;
unsigned int iso_pkts;
__u16 block_len;
__u16 block_mtu;
__u16 num_blocks;
__u16 block_cnt;
unsigned long acl_last_tx;
unsigned long sco_last_tx;
unsigned long le_last_tx;
@ -706,6 +677,7 @@ struct hci_conn {
__u16 handle;
__u16 sync_handle;
__u16 state;
__u16 mtu;
__u8 mode;
__u8 type;
__u8 role;
@ -777,7 +749,6 @@ struct hci_conn {
void *l2cap_data;
void *sco_data;
void *iso_data;
struct amp_mgr *amp_mgr;
struct list_head link_list;
struct hci_conn *parent;
@ -804,7 +775,6 @@ struct hci_chan {
struct sk_buff_head data_q;
unsigned int sent;
__u8 state;
bool amp;
};
struct hci_conn_params {
@ -1013,9 +983,6 @@ static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c)
case ACL_LINK:
h->acl_num++;
break;
case AMP_LINK:
h->amp_num++;
break;
case LE_LINK:
h->le_num++;
if (c->role == HCI_ROLE_SLAVE)
@ -1042,9 +1009,6 @@ static inline void hci_conn_hash_del(struct hci_dev *hdev, struct hci_conn *c)
case ACL_LINK:
h->acl_num--;
break;
case AMP_LINK:
h->amp_num--;
break;
case LE_LINK:
h->le_num--;
if (c->role == HCI_ROLE_SLAVE)
@ -1066,8 +1030,6 @@ static inline unsigned int hci_conn_num(struct hci_dev *hdev, __u8 type)
switch (type) {
case ACL_LINK:
return h->acl_num;
case AMP_LINK:
return h->amp_num;
case LE_LINK:
return h->le_num;
case SCO_LINK:
@ -1084,7 +1046,7 @@ static inline unsigned int hci_conn_count(struct hci_dev *hdev)
{
struct hci_conn_hash *c = &hdev->conn_hash;
return c->acl_num + c->amp_num + c->sco_num + c->le_num + c->iso_num;
return c->acl_num + c->sco_num + c->le_num + c->iso_num;
}
static inline bool hci_conn_valid(struct hci_dev *hdev, struct hci_conn *conn)
@ -1373,8 +1335,7 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
rcu_read_lock();
list_for_each_entry_rcu(c, &h->list, list) {
if (c->type != ISO_LINK ||
!test_bit(HCI_CONN_PA_SYNC, &c->flags))
if (c->type != ISO_LINK)
continue;
if (c->sync_handle == sync_handle) {
@ -1610,10 +1571,6 @@ static inline void hci_conn_drop(struct hci_conn *conn)
}
break;
case AMP_LINK:
timeo = conn->disc_timeout;
break;
default:
timeo = 0;
break;
@ -2235,8 +2192,22 @@ void hci_mgmt_chan_unregister(struct hci_mgmt_chan *c);
/* These LE scan and inquiry parameters were chosen according to LE General
* Discovery Procedure specification.
*/
#define DISCOV_LE_SCAN_WIN 0x12
#define DISCOV_LE_SCAN_INT 0x12
#define DISCOV_LE_SCAN_WIN 0x0012 /* 11.25 msec */
#define DISCOV_LE_SCAN_INT 0x0012 /* 11.25 msec */
#define DISCOV_LE_SCAN_INT_FAST 0x0060 /* 60 msec */
#define DISCOV_LE_SCAN_WIN_FAST 0x0030 /* 30 msec */
#define DISCOV_LE_SCAN_INT_CONN 0x0060 /* 60 msec */
#define DISCOV_LE_SCAN_WIN_CONN 0x0060 /* 60 msec */
#define DISCOV_LE_SCAN_INT_SLOW1 0x0800 /* 1.28 sec */
#define DISCOV_LE_SCAN_WIN_SLOW1 0x0012 /* 11.25 msec */
#define DISCOV_LE_SCAN_INT_SLOW2 0x1000 /* 2.56 sec */
#define DISCOV_LE_SCAN_WIN_SLOW2 0x0024 /* 22.5 msec */
#define DISCOV_CODED_SCAN_INT_FAST 0x0120 /* 180 msec */
#define DISCOV_CODED_SCAN_WIN_FAST 0x0090 /* 90 msec */
#define DISCOV_CODED_SCAN_INT_SLOW1 0x1800 /* 3.84 sec */
#define DISCOV_CODED_SCAN_WIN_SLOW1 0x0036 /* 33.75 msec */
#define DISCOV_CODED_SCAN_INT_SLOW2 0x3000 /* 7.68 sec */
#define DISCOV_CODED_SCAN_WIN_SLOW2 0x006c /* 67.5 msec */
#define DISCOV_LE_TIMEOUT 10240 /* msec */
#define DISCOV_INTERLEAVED_TIMEOUT 5120 /* msec */
#define DISCOV_INTERLEAVED_INQUIRY_LEN 0x04
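
For reference, the scan interval and window values above are expressed in the controller's 0.625 ms units, which is where the comments come from: 0x0012 = 18 * 0.625 ms = 11.25 ms, 0x0060 = 96 * 0.625 ms = 60 ms, 0x1000 = 4096 * 0.625 ms = 2.56 s, and so on. A trivial conversion helper, shown for illustration only and not part of this series:

static inline u32 le_scan_units_to_usec(u16 units)
{
	return (u32)units * 625;	/* one unit is 0.625 ms == 625 us */
}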


@ -463,18 +463,24 @@ struct l2cap_le_credits {
#define L2CAP_ECRED_MAX_CID 5
struct l2cap_ecred_conn_req {
__le16 psm;
__le16 mtu;
__le16 mps;
__le16 credits;
/* New members must be added within the struct_group() macro below. */
__struct_group(l2cap_ecred_conn_req_hdr, hdr, __packed,
__le16 psm;
__le16 mtu;
__le16 mps;
__le16 credits;
);
__le16 scid[];
} __packed;
struct l2cap_ecred_conn_rsp {
__le16 mtu;
__le16 mps;
__le16 credits;
__le16 result;
/* New members must be added within the struct_group() macro below. */
struct_group_tagged(l2cap_ecred_conn_rsp_hdr, hdr,
__le16 mtu;
__le16 mps;
__le16 credits;
__le16 result;
);
__le16 dcid[];
};
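
The struct_group()/struct_group_tagged() conversions above keep every existing field access intact while giving the fixed-size header its own tag, so other structures can embed just the header without dragging the flexible scid[]/dcid[] array along, which is what silences -Wflex-array-member-not-at-end. Roughly, and simplified from the macro in <linux/stddef.h>, the tagged form expands to something like:

struct l2cap_ecred_conn_rsp {
	union {
		struct {				/* anonymous: rsp->mtu etc. keep working */
			__le16 mtu;
			__le16 mps;
			__le16 credits;
			__le16 result;
		};
		struct l2cap_ecred_conn_rsp_hdr {	/* embeddable fixed-size header */
			__le16 mtu;
			__le16 mps;
			__le16 credits;
			__le16 result;
		} hdr;
	};
	__le16 dcid[];
};

l2cap_ecred_rsp_defer() further down relies on this: it embeds the _hdr type in its on-stack pdu and uses container_of() to recover the flexible view when filling in dcid[].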
@ -548,6 +554,9 @@ struct l2cap_chan {
__u16 tx_credits;
__u16 rx_credits;
/* estimated available receive buffer space or -1 if unknown */
ssize_t rx_avail;
__u8 tx_state;
__u8 rx_state;
@ -682,10 +691,15 @@ struct l2cap_user {
/* ----- L2CAP socket info ----- */
#define l2cap_pi(sk) ((struct l2cap_pinfo *) sk)
struct l2cap_rx_busy {
struct list_head list;
struct sk_buff *skb;
};
struct l2cap_pinfo {
struct bt_sock bt;
struct l2cap_chan *chan;
struct sk_buff *rx_busy_skb;
struct list_head rx_busy;
};
enum {
@ -943,6 +957,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
int l2cap_chan_reconfigure(struct l2cap_chan *chan, __u16 mtu);
int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len);
void l2cap_chan_busy(struct l2cap_chan *chan, int busy);
void l2cap_chan_rx_avail(struct l2cap_chan *chan, ssize_t rx_avail);
int l2cap_chan_check_security(struct l2cap_chan *chan, bool initiator);
void l2cap_chan_set_defaults(struct l2cap_chan *chan);
int l2cap_ertm_init(struct l2cap_chan *chan);


@ -13,7 +13,6 @@
enum virtio_bt_config_type {
VIRTIO_BT_CONFIG_TYPE_PRIMARY = 0,
VIRTIO_BT_CONFIG_TYPE_AMP = 1,
};
enum virtio_bt_config_vendor {


@ -241,13 +241,13 @@ static int configure_datapath_sync(struct hci_dev *hdev, struct bt_codec *codec)
__u8 vnd_len, *vnd_data = NULL;
struct hci_op_configure_data_path *cmd = NULL;
/* Do not take below 2 checks as error since the 1st means user do not
* want to use HFP offload mode and the 2nd means the vendor controller
* do not need to send below HCI command for offload mode.
*/
if (!codec->data_path || !hdev->get_codec_config_data)
return 0;
/* Do not take me as error */
if (!hdev->get_codec_config_data)
return 0;
err = hdev->get_codec_config_data(hdev, ESCO_LINK, codec, &vnd_len,
&vnd_data);
if (err < 0)
@ -664,11 +664,6 @@ static void le_conn_timeout(struct work_struct *work)
hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
}
struct iso_cig_params {
struct hci_cp_le_set_cig_params cp;
struct hci_cis_params cis[0x1f];
};
struct iso_list_data {
union {
u8 cig;
@ -909,11 +904,37 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
{
struct hci_conn *conn;
switch (type) {
case ACL_LINK:
if (!hdev->acl_mtu)
return ERR_PTR(-ECONNREFUSED);
break;
case ISO_LINK:
if (hdev->iso_mtu)
/* Dedicated ISO Buffer exists */
break;
fallthrough;
case LE_LINK:
if (hdev->le_mtu && hdev->le_mtu < HCI_MIN_LE_MTU)
return ERR_PTR(-ECONNREFUSED);
if (!hdev->le_mtu && hdev->acl_mtu < HCI_MIN_LE_MTU)
return ERR_PTR(-ECONNREFUSED);
break;
case SCO_LINK:
case ESCO_LINK:
if (!hdev->sco_pkts)
/* Controller does not support SCO or eSCO over HCI */
return ERR_PTR(-ECONNREFUSED);
break;
default:
return ERR_PTR(-ECONNREFUSED);
}
bt_dev_dbg(hdev, "dst %pMR handle 0x%4.4x", dst, handle);
conn = kzalloc(sizeof(*conn), GFP_KERNEL);
if (!conn)
return NULL;
return ERR_PTR(-ENOMEM);
bacpy(&conn->dst, dst);
bacpy(&conn->src, &hdev->bdaddr);
@ -944,10 +965,12 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
switch (type) {
case ACL_LINK:
conn->pkt_type = hdev->pkt_type & ACL_PTYPE_MASK;
conn->mtu = hdev->acl_mtu;
break;
case LE_LINK:
/* conn->src should reflect the local identity address */
hci_copy_identity_address(hdev, &conn->src, &conn->src_type);
conn->mtu = hdev->le_mtu ? hdev->le_mtu : hdev->acl_mtu;
break;
case ISO_LINK:
/* conn->src should reflect the local identity address */
@ -959,6 +982,8 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
else if (conn->role == HCI_ROLE_MASTER)
conn->cleanup = cis_cleanup;
conn->mtu = hdev->iso_mtu ? hdev->iso_mtu :
hdev->le_mtu ? hdev->le_mtu : hdev->acl_mtu;
break;
case SCO_LINK:
if (lmp_esco_capable(hdev))
@ -966,9 +991,12 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
(hdev->esco_type & EDR_ESCO_MASK);
else
conn->pkt_type = hdev->pkt_type & SCO_PTYPE_MASK;
conn->mtu = hdev->sco_mtu;
break;
case ESCO_LINK:
conn->pkt_type = hdev->esco_type & ~EDR_ESCO_MASK;
conn->mtu = hdev->sco_mtu;
break;
}
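
The per-type conn->mtu assignments above encode a simple fallback chain: ISO links prefer the dedicated ISO buffer size, then the LE buffer size, then the shared ACL one, and plain LE links fall back from le_mtu to acl_mtu. A sketch of the ISO case, mirroring the ternaries above (illustrative helper, not in the tree):

static u16 iso_conn_mtu(struct hci_dev *hdev)
{
	if (hdev->iso_mtu)	/* dedicated ISO buffers reported by the controller */
		return hdev->iso_mtu;
	if (hdev->le_mtu)	/* dedicated LE buffers */
		return hdev->le_mtu;
	return hdev->acl_mtu;	/* buffers shared with BR/EDR */
}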
@ -1011,7 +1039,7 @@ struct hci_conn *hci_conn_add_unset(struct hci_dev *hdev, int type,
handle = hci_conn_hash_alloc_unset(hdev);
if (unlikely(handle < 0))
return NULL;
return ERR_PTR(-ECONNREFUSED);
return hci_conn_add(hdev, type, dst, role, handle);
}
@ -1140,8 +1168,7 @@ struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src, uint8_t src_type)
list_for_each_entry(d, &hci_dev_list, list) {
if (!test_bit(HCI_UP, &d->flags) ||
hci_dev_test_flag(d, HCI_USER_CHANNEL) ||
d->dev_type != HCI_PRIMARY)
hci_dev_test_flag(d, HCI_USER_CHANNEL))
continue;
/* Simple routing:
@ -1317,8 +1344,8 @@ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst,
bacpy(&conn->dst, dst);
} else {
conn = hci_conn_add_unset(hdev, LE_LINK, dst, role);
if (!conn)
return ERR_PTR(-ENOMEM);
if (IS_ERR(conn))
return conn;
hci_conn_hold(conn);
conn->pending_sec_level = sec_level;
}
@ -1494,8 +1521,8 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
return ERR_PTR(-EADDRINUSE);
conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
if (!conn)
return ERR_PTR(-ENOMEM);
if (IS_ERR(conn))
return conn;
conn->state = BT_CONNECT;
@ -1538,8 +1565,8 @@ struct hci_conn *hci_connect_le_scan(struct hci_dev *hdev, bdaddr_t *dst,
BT_DBG("requesting refresh of dst_addr");
conn = hci_conn_add_unset(hdev, LE_LINK, dst, HCI_ROLE_MASTER);
if (!conn)
return ERR_PTR(-ENOMEM);
if (IS_ERR(conn))
return conn;
if (hci_explicit_conn_params_set(hdev, dst, dst_type) < 0) {
hci_conn_del(conn);
@ -1586,8 +1613,8 @@ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst,
acl = hci_conn_hash_lookup_ba(hdev, ACL_LINK, dst);
if (!acl) {
acl = hci_conn_add_unset(hdev, ACL_LINK, dst, HCI_ROLE_MASTER);
if (!acl)
return ERR_PTR(-ENOMEM);
if (IS_ERR(acl))
return acl;
}
hci_conn_hold(acl);
@ -1655,9 +1682,9 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
sco = hci_conn_hash_lookup_ba(hdev, type, dst);
if (!sco) {
sco = hci_conn_add_unset(hdev, type, dst, HCI_ROLE_MASTER);
if (!sco) {
if (IS_ERR(sco)) {
hci_conn_drop(acl);
return ERR_PTR(-ENOMEM);
return sco;
}
}
@ -1722,34 +1749,33 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
static int set_cig_params_sync(struct hci_dev *hdev, void *data)
{
DEFINE_FLEX(struct hci_cp_le_set_cig_params, pdu, cis, num_cis, 0x1f);
u8 cig_id = PTR_UINT(data);
struct hci_conn *conn;
struct bt_iso_qos *qos;
struct iso_cig_params pdu;
u8 aux_num_cis = 0;
u8 cis_id;
conn = hci_conn_hash_lookup_cig(hdev, cig_id);
if (!conn)
return 0;
memset(&pdu, 0, sizeof(pdu));
qos = &conn->iso_qos;
pdu.cp.cig_id = cig_id;
hci_cpu_to_le24(qos->ucast.out.interval, pdu.cp.c_interval);
hci_cpu_to_le24(qos->ucast.in.interval, pdu.cp.p_interval);
pdu.cp.sca = qos->ucast.sca;
pdu.cp.packing = qos->ucast.packing;
pdu.cp.framing = qos->ucast.framing;
pdu.cp.c_latency = cpu_to_le16(qos->ucast.out.latency);
pdu.cp.p_latency = cpu_to_le16(qos->ucast.in.latency);
pdu->cig_id = cig_id;
hci_cpu_to_le24(qos->ucast.out.interval, pdu->c_interval);
hci_cpu_to_le24(qos->ucast.in.interval, pdu->p_interval);
pdu->sca = qos->ucast.sca;
pdu->packing = qos->ucast.packing;
pdu->framing = qos->ucast.framing;
pdu->c_latency = cpu_to_le16(qos->ucast.out.latency);
pdu->p_latency = cpu_to_le16(qos->ucast.in.latency);
/* Reprogram all CIS(s) with the same CIG, valid range are:
* num_cis: 0x00 to 0x1F
* cis_id: 0x00 to 0xEF
*/
for (cis_id = 0x00; cis_id < 0xf0 &&
pdu.cp.num_cis < ARRAY_SIZE(pdu.cis); cis_id++) {
aux_num_cis < pdu->num_cis; cis_id++) {
struct hci_cis_params *cis;
conn = hci_conn_hash_lookup_cis(hdev, NULL, 0, cig_id, cis_id);
@ -1758,7 +1784,7 @@ static int set_cig_params_sync(struct hci_dev *hdev, void *data)
qos = &conn->iso_qos;
cis = &pdu.cis[pdu.cp.num_cis++];
cis = &pdu->cis[aux_num_cis++];
cis->cis_id = cis_id;
cis->c_sdu = cpu_to_le16(conn->iso_qos.ucast.out.sdu);
cis->p_sdu = cpu_to_le16(conn->iso_qos.ucast.in.sdu);
@ -1769,14 +1795,14 @@ static int set_cig_params_sync(struct hci_dev *hdev, void *data)
cis->c_rtn = qos->ucast.out.rtn;
cis->p_rtn = qos->ucast.in.rtn;
}
pdu->num_cis = aux_num_cis;
if (!pdu.cp.num_cis)
if (!pdu->num_cis)
return 0;
return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_CIG_PARAMS,
sizeof(pdu.cp) +
pdu.cp.num_cis * sizeof(pdu.cis[0]), &pdu,
HCI_CMD_TIMEOUT);
struct_size(pdu, cis, pdu->num_cis),
pdu, HCI_CMD_TIMEOUT);
}
static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
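
Worth noting about the DEFINE_FLEX() conversion above (macro from <linux/overflow.h>): it declares a zero-initialised on-stack buffer with room for the requested maximum of trailing elements and pre-sets the counter member to that maximum, 0x1f here. That is why the loop bounds against pdu->num_cis while filling the array and only writes the real count back (pdu->num_cis = aux_num_cis) before sending struct_size(pdu, cis, ...) bytes. A condensed sketch of the idiom with a hypothetical command structure rather than the HCI one:

#include <linux/overflow.h>

struct demo_item {
	__u8 id;
} __packed;

struct demo_cp {
	__u8 num_items;
	struct demo_item items[] __counted_by(num_items);
} __packed;

static void demo_build(void)
{
	DEFINE_FLEX(struct demo_cp, cp, items, num_items, 0x1f);
	__u8 n;

	for (n = 0; n < 3 && n < cp->num_items; n++)
		cp->items[n].id = n;	/* num_items still holds the capacity (0x1f) */

	cp->num_items = n;		/* write back the real element count */
	/* transmit struct_size(cp, items, n) bytes starting at cp */
}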
@ -1847,8 +1873,8 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
qos->ucast.cis);
if (!cis) {
cis = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
if (!cis)
return ERR_PTR(-ENOMEM);
if (IS_ERR(cis))
return cis;
cis->cleanup = cis_cleanup;
cis->dst_type = dst_type;
cis->iso_qos.ucast.cig = BT_ISO_QOS_CIG_UNSET;
@ -1983,14 +2009,8 @@ static void hci_iso_qos_setup(struct hci_dev *hdev, struct hci_conn *conn,
struct bt_iso_io_qos *qos, __u8 phy)
{
/* Only set MTU if PHY is enabled */
if (!qos->sdu && qos->phy) {
if (hdev->iso_mtu > 0)
qos->sdu = hdev->iso_mtu;
else if (hdev->le_mtu > 0)
qos->sdu = hdev->le_mtu;
else
qos->sdu = hdev->acl_mtu;
}
if (!qos->sdu && qos->phy)
qos->sdu = conn->mtu;
/* Use the same PHY as ACL if set to any */
if (qos->phy == BT_ISO_PHY_ANY)
@ -2071,8 +2091,8 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
return ERR_PTR(-EBUSY);
conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE);
if (!conn)
return ERR_PTR(-ENOMEM);
if (IS_ERR(conn))
return conn;
conn->iso_qos = *qos;
conn->state = BT_LISTEN;
@ -2109,13 +2129,10 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
struct bt_iso_qos *qos,
__u16 sync_handle, __u8 num_bis, __u8 bis[])
{
struct _packed {
struct hci_cp_le_big_create_sync cp;
__u8 bis[0x11];
} pdu;
DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
int err;
if (num_bis < 0x01 || num_bis > sizeof(pdu.bis))
if (num_bis < 0x01 || num_bis > pdu->num_bis)
return -EINVAL;
err = qos_set_big(hdev, qos);
@ -2125,18 +2142,17 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
if (hcon)
hcon->iso_qos.bcast.big = qos->bcast.big;
memset(&pdu, 0, sizeof(pdu));
pdu.cp.handle = qos->bcast.big;
pdu.cp.sync_handle = cpu_to_le16(sync_handle);
pdu.cp.encryption = qos->bcast.encryption;
memcpy(pdu.cp.bcode, qos->bcast.bcode, sizeof(pdu.cp.bcode));
pdu.cp.mse = qos->bcast.mse;
pdu.cp.timeout = cpu_to_le16(qos->bcast.timeout);
pdu.cp.num_bis = num_bis;
memcpy(pdu.bis, bis, num_bis);
pdu->handle = qos->bcast.big;
pdu->sync_handle = cpu_to_le16(sync_handle);
pdu->encryption = qos->bcast.encryption;
memcpy(pdu->bcode, qos->bcast.bcode, sizeof(pdu->bcode));
pdu->mse = qos->bcast.mse;
pdu->timeout = cpu_to_le16(qos->bcast.timeout);
pdu->num_bis = num_bis;
memcpy(pdu->bis, bis, num_bis);
return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
sizeof(pdu.cp) + num_bis, &pdu);
struct_size(pdu, bis, num_bis), pdu);
}
static void create_big_complete(struct hci_dev *hdev, void *data, int err)


@ -149,8 +149,6 @@ void hci_discovery_set_state(struct hci_dev *hdev, int state)
{
int old_state = hdev->discovery.state;
BT_DBG("%s state %u -> %u", hdev->name, hdev->discovery.state, state);
if (old_state == state)
return;
@ -166,6 +164,13 @@ void hci_discovery_set_state(struct hci_dev *hdev, int state)
case DISCOVERY_STARTING:
break;
case DISCOVERY_FINDING:
/* If discovery was not started then it was initiated by the
* MGMT interface so no MGMT event shall be generated either
*/
if (old_state != DISCOVERY_STARTING) {
hdev->discovery.state = old_state;
return;
}
mgmt_discovering(hdev, 1);
break;
case DISCOVERY_RESOLVING:
@ -173,6 +178,8 @@ void hci_discovery_set_state(struct hci_dev *hdev, int state)
case DISCOVERY_STOPPING:
break;
}
bt_dev_dbg(hdev, "state %u -> %u", old_state, state);
}
void hci_inquiry_cache_flush(struct hci_dev *hdev)
@ -395,11 +402,6 @@ int hci_inquiry(void __user *arg)
goto done;
}
if (hdev->dev_type != HCI_PRIMARY) {
err = -EOPNOTSUPP;
goto done;
}
if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED)) {
err = -EOPNOTSUPP;
goto done;
@ -752,11 +754,6 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
goto done;
}
if (hdev->dev_type != HCI_PRIMARY) {
err = -EOPNOTSUPP;
goto done;
}
if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED)) {
err = -EOPNOTSUPP;
goto done;
@ -910,7 +907,7 @@ int hci_get_dev_info(void __user *arg)
strscpy(di.name, hdev->name, sizeof(di.name));
di.bdaddr = hdev->bdaddr;
di.type = (hdev->bus & 0x0f) | ((hdev->dev_type & 0x03) << 4);
di.type = (hdev->bus & 0x0f);
di.flags = flags;
di.pkt_type = hdev->pkt_type;
if (lmp_bredr_capable(hdev)) {
@ -1026,8 +1023,7 @@ static void hci_power_on(struct work_struct *work)
*/
if (hci_dev_test_flag(hdev, HCI_RFKILLED) ||
hci_dev_test_flag(hdev, HCI_UNCONFIGURED) ||
(hdev->dev_type == HCI_PRIMARY &&
!bacmp(&hdev->bdaddr, BDADDR_ANY) &&
(!bacmp(&hdev->bdaddr, BDADDR_ANY) &&
!bacmp(&hdev->static_addr, BDADDR_ANY))) {
hci_dev_clear_flag(hdev, HCI_AUTO_OFF);
hci_dev_do_close(hdev);
@ -1769,6 +1765,15 @@ struct adv_info *hci_add_adv_instance(struct hci_dev *hdev, u8 instance,
adv->pending = true;
adv->instance = instance;
/* If controller support only one set and the instance is set to
* 1 then there is no option other than using handle 0x00.
*/
if (hdev->le_num_of_adv_sets == 1 && instance == 1)
adv->handle = 0x00;
else
adv->handle = instance;
list_add(&adv->list, &hdev->adv_instances);
hdev->adv_instance_cnt++;
}
@ -2523,16 +2528,16 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
hdev->le_adv_channel_map = 0x07;
hdev->le_adv_min_interval = 0x0800;
hdev->le_adv_max_interval = 0x0800;
hdev->le_scan_interval = 0x0060;
hdev->le_scan_window = 0x0030;
hdev->le_scan_int_suspend = 0x0400;
hdev->le_scan_window_suspend = 0x0012;
hdev->le_scan_interval = DISCOV_LE_SCAN_INT_FAST;
hdev->le_scan_window = DISCOV_LE_SCAN_WIN_FAST;
hdev->le_scan_int_suspend = DISCOV_LE_SCAN_INT_SLOW1;
hdev->le_scan_window_suspend = DISCOV_LE_SCAN_WIN_SLOW1;
hdev->le_scan_int_discovery = DISCOV_LE_SCAN_INT;
hdev->le_scan_window_discovery = DISCOV_LE_SCAN_WIN;
hdev->le_scan_int_adv_monitor = 0x0060;
hdev->le_scan_window_adv_monitor = 0x0030;
hdev->le_scan_int_connect = 0x0060;
hdev->le_scan_window_connect = 0x0060;
hdev->le_scan_int_adv_monitor = DISCOV_LE_SCAN_INT_FAST;
hdev->le_scan_window_adv_monitor = DISCOV_LE_SCAN_WIN_FAST;
hdev->le_scan_int_connect = DISCOV_LE_SCAN_INT_CONN;
hdev->le_scan_window_connect = DISCOV_LE_SCAN_WIN_CONN;
hdev->le_conn_min_interval = 0x0018;
hdev->le_conn_max_interval = 0x0028;
hdev->le_conn_latency = 0x0000;
@ -2549,7 +2554,7 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
hdev->le_rx_def_phys = HCI_LE_SET_PHY_1M;
hdev->le_num_of_adv_sets = HCI_MAX_ADV_INSTANCES;
hdev->def_multi_adv_rotation_duration = HCI_DEFAULT_ADV_DURATION;
hdev->def_le_autoconnect_timeout = HCI_LE_AUTOCONN_TIMEOUT;
hdev->def_le_autoconnect_timeout = HCI_LE_CONN_TIMEOUT;
hdev->min_le_tx_power = HCI_TX_POWER_INVALID;
hdev->max_le_tx_power = HCI_TX_POWER_INVALID;
@ -2635,21 +2640,7 @@ int hci_register_dev(struct hci_dev *hdev)
if (!hdev->open || !hdev->close || !hdev->send)
return -EINVAL;
/* Do not allow HCI_AMP devices to register at index 0,
* so the index can be used as the AMP controller ID.
*/
switch (hdev->dev_type) {
case HCI_PRIMARY:
id = ida_alloc_max(&hci_index_ida, HCI_MAX_ID - 1, GFP_KERNEL);
break;
case HCI_AMP:
id = ida_alloc_range(&hci_index_ida, 1, HCI_MAX_ID - 1,
GFP_KERNEL);
break;
default:
return -EINVAL;
}
id = ida_alloc_max(&hci_index_ida, HCI_MAX_ID - 1, GFP_KERNEL);
if (id < 0)
return id;
@ -2701,12 +2692,10 @@ int hci_register_dev(struct hci_dev *hdev)
hci_dev_set_flag(hdev, HCI_SETUP);
hci_dev_set_flag(hdev, HCI_AUTO_OFF);
if (hdev->dev_type == HCI_PRIMARY) {
/* Assume BR/EDR support until proven otherwise (such as
* through reading supported features during init.
*/
hci_dev_set_flag(hdev, HCI_BREDR_ENABLED);
}
/* Assume BR/EDR support until proven otherwise (such as
* through reading supported features during init.
*/
hci_dev_set_flag(hdev, HCI_BREDR_ENABLED);
write_lock(&hci_dev_list_lock);
list_add(&hdev->list, &hci_dev_list);
@ -3242,17 +3231,7 @@ static void hci_queue_acl(struct hci_chan *chan, struct sk_buff_head *queue,
hci_skb_pkt_type(skb) = HCI_ACLDATA_PKT;
switch (hdev->dev_type) {
case HCI_PRIMARY:
hci_add_acl_hdr(skb, conn->handle, flags);
break;
case HCI_AMP:
hci_add_acl_hdr(skb, chan->handle, flags);
break;
default:
bt_dev_err(hdev, "unknown dev_type %d", hdev->dev_type);
return;
}
hci_add_acl_hdr(skb, conn->handle, flags);
list = skb_shinfo(skb)->frag_list;
if (!list) {
@ -3412,9 +3391,6 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
case ACL_LINK:
cnt = hdev->acl_cnt;
break;
case AMP_LINK:
cnt = hdev->block_cnt;
break;
case SCO_LINK:
case ESCO_LINK:
cnt = hdev->sco_cnt;
@ -3612,12 +3588,6 @@ static void hci_prio_recalculate(struct hci_dev *hdev, __u8 type)
}
static inline int __get_blocks(struct hci_dev *hdev, struct sk_buff *skb)
{
/* Calculate count of blocks used by this packet */
return DIV_ROUND_UP(skb->len - HCI_ACL_HDR_SIZE, hdev->block_len);
}
static void __check_timeout(struct hci_dev *hdev, unsigned int cnt, u8 type)
{
unsigned long last_tx;
@ -3731,81 +3701,15 @@ static void hci_sched_acl_pkt(struct hci_dev *hdev)
hci_prio_recalculate(hdev, ACL_LINK);
}
static void hci_sched_acl_blk(struct hci_dev *hdev)
{
unsigned int cnt = hdev->block_cnt;
struct hci_chan *chan;
struct sk_buff *skb;
int quote;
u8 type;
BT_DBG("%s", hdev->name);
if (hdev->dev_type == HCI_AMP)
type = AMP_LINK;
else
type = ACL_LINK;
__check_timeout(hdev, cnt, type);
while (hdev->block_cnt > 0 &&
(chan = hci_chan_sent(hdev, type, &quote))) {
u32 priority = (skb_peek(&chan->data_q))->priority;
while (quote > 0 && (skb = skb_peek(&chan->data_q))) {
int blocks;
BT_DBG("chan %p skb %p len %d priority %u", chan, skb,
skb->len, skb->priority);
/* Stop if priority has changed */
if (skb->priority < priority)
break;
skb = skb_dequeue(&chan->data_q);
blocks = __get_blocks(hdev, skb);
if (blocks > hdev->block_cnt)
return;
hci_conn_enter_active_mode(chan->conn,
bt_cb(skb)->force_active);
hci_send_frame(hdev, skb);
hdev->acl_last_tx = jiffies;
hdev->block_cnt -= blocks;
quote -= blocks;
chan->sent += blocks;
chan->conn->sent += blocks;
}
}
if (cnt != hdev->block_cnt)
hci_prio_recalculate(hdev, type);
}
static void hci_sched_acl(struct hci_dev *hdev)
{
BT_DBG("%s", hdev->name);
/* No ACL link over BR/EDR controller */
if (!hci_conn_num(hdev, ACL_LINK) && hdev->dev_type == HCI_PRIMARY)
if (!hci_conn_num(hdev, ACL_LINK))
return;
/* No AMP link over AMP controller */
if (!hci_conn_num(hdev, AMP_LINK) && hdev->dev_type == HCI_AMP)
return;
switch (hdev->flow_ctl_mode) {
case HCI_FLOW_CTL_MODE_PACKET_BASED:
hci_sched_acl_pkt(hdev);
break;
case HCI_FLOW_CTL_MODE_BLOCK_BASED:
hci_sched_acl_blk(hdev);
break;
}
hci_sched_acl_pkt(hdev);
}
static void hci_sched_le(struct hci_dev *hdev)


@ -1,7 +1,7 @@
/*
BlueZ - Bluetooth protocol stack for Linux
Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved.
Copyright 2023 NXP
Copyright 2023-2024 NXP
Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
@ -913,21 +913,6 @@ static u8 hci_cc_read_local_ext_features(struct hci_dev *hdev, void *data,
return rp->status;
}
static u8 hci_cc_read_flow_control_mode(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
struct hci_rp_read_flow_control_mode *rp = data;
bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
if (rp->status)
return rp->status;
hdev->flow_ctl_mode = rp->mode;
return rp->status;
}
static u8 hci_cc_read_buffer_size(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
@ -954,6 +939,9 @@ static u8 hci_cc_read_buffer_size(struct hci_dev *hdev, void *data,
BT_DBG("%s acl mtu %d:%d sco mtu %d:%d", hdev->name, hdev->acl_mtu,
hdev->acl_pkts, hdev->sco_mtu, hdev->sco_pkts);
if (!hdev->acl_mtu || !hdev->acl_pkts)
return HCI_ERROR_INVALID_PARAMETERS;
return rp->status;
}
@ -1068,28 +1056,6 @@ static u8 hci_cc_write_page_scan_type(struct hci_dev *hdev, void *data,
return rp->status;
}
static u8 hci_cc_read_data_block_size(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
struct hci_rp_read_data_block_size *rp = data;
bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
if (rp->status)
return rp->status;
hdev->block_mtu = __le16_to_cpu(rp->max_acl_len);
hdev->block_len = __le16_to_cpu(rp->block_len);
hdev->num_blocks = __le16_to_cpu(rp->num_blocks);
hdev->block_cnt = hdev->num_blocks;
BT_DBG("%s blk mtu %d cnt %d len %d", hdev->name, hdev->block_mtu,
hdev->block_cnt, hdev->block_len);
return rp->status;
}
static u8 hci_cc_read_clock(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
@ -1124,30 +1090,6 @@ unlock:
return rp->status;
}
static u8 hci_cc_read_local_amp_info(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
struct hci_rp_read_local_amp_info *rp = data;
bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
if (rp->status)
return rp->status;
hdev->amp_status = rp->amp_status;
hdev->amp_total_bw = __le32_to_cpu(rp->total_bw);
hdev->amp_max_bw = __le32_to_cpu(rp->max_bw);
hdev->amp_min_latency = __le32_to_cpu(rp->min_latency);
hdev->amp_max_pdu = __le32_to_cpu(rp->max_pdu);
hdev->amp_type = rp->amp_type;
hdev->amp_pal_cap = __le16_to_cpu(rp->pal_cap);
hdev->amp_assoc_size = __le16_to_cpu(rp->max_assoc_size);
hdev->amp_be_flush_to = __le32_to_cpu(rp->be_flush_to);
hdev->amp_max_flush_to = __le32_to_cpu(rp->max_flush_to);
return rp->status;
}
static u8 hci_cc_read_inq_rsp_tx_power(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
@ -1263,6 +1205,9 @@ static u8 hci_cc_le_read_buffer_size(struct hci_dev *hdev, void *data,
BT_DBG("%s le mtu %d:%d", hdev->name, hdev->le_mtu, hdev->le_pkts);
if (hdev->le_mtu && hdev->le_mtu < HCI_MIN_LE_MTU)
return HCI_ERROR_INVALID_PARAMETERS;
return rp->status;
}
@ -1779,8 +1724,7 @@ static void le_set_scan_enable_complete(struct hci_dev *hdev, u8 enable)
hci_dev_set_flag(hdev, HCI_LE_SCAN);
if (hdev->le_scan_type == LE_SCAN_ACTIVE)
clear_pending_adv_report(hdev);
if (hci_dev_test_flag(hdev, HCI_MESH))
hci_discovery_set_state(hdev, DISCOVERY_FINDING);
hci_discovery_set_state(hdev, DISCOVERY_FINDING);
break;
case LE_SCAN_DISABLE:
@ -2342,8 +2286,8 @@ static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status)
if (!conn) {
conn = hci_conn_add_unset(hdev, ACL_LINK, &cp->bdaddr,
HCI_ROLE_MASTER);
if (!conn)
bt_dev_err(hdev, "no memory for new connection");
if (IS_ERR(conn))
bt_dev_err(hdev, "connection err: %ld", PTR_ERR(conn));
}
}
@ -3154,8 +3098,8 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
BDADDR_BREDR)) {
conn = hci_conn_add_unset(hdev, ev->link_type,
&ev->bdaddr, HCI_ROLE_SLAVE);
if (!conn) {
bt_dev_err(hdev, "no memory for new conn");
if (IS_ERR(conn)) {
bt_dev_err(hdev, "connection err: %ld", PTR_ERR(conn));
goto unlock;
}
} else {
@ -3343,8 +3287,8 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
if (!conn) {
conn = hci_conn_add_unset(hdev, ev->link_type, &ev->bdaddr,
HCI_ROLE_SLAVE);
if (!conn) {
bt_dev_err(hdev, "no memory for new connection");
if (IS_ERR(conn)) {
bt_dev_err(hdev, "connection err: %ld", PTR_ERR(conn));
goto unlock;
}
}
@ -3821,6 +3765,9 @@ static u8 hci_cc_le_read_buffer_size_v2(struct hci_dev *hdev, void *data,
BT_DBG("%s acl mtu %d:%d iso mtu %d:%d", hdev->name, hdev->acl_mtu,
hdev->acl_pkts, hdev->iso_mtu, hdev->iso_pkts);
if (hdev->le_mtu && hdev->le_mtu < HCI_MIN_LE_MTU)
return HCI_ERROR_INVALID_PARAMETERS;
return rp->status;
}
@ -4112,12 +4059,6 @@ static const struct hci_cc {
HCI_CC(HCI_OP_READ_PAGE_SCAN_TYPE, hci_cc_read_page_scan_type,
sizeof(struct hci_rp_read_page_scan_type)),
HCI_CC_STATUS(HCI_OP_WRITE_PAGE_SCAN_TYPE, hci_cc_write_page_scan_type),
HCI_CC(HCI_OP_READ_DATA_BLOCK_SIZE, hci_cc_read_data_block_size,
sizeof(struct hci_rp_read_data_block_size)),
HCI_CC(HCI_OP_READ_FLOW_CONTROL_MODE, hci_cc_read_flow_control_mode,
sizeof(struct hci_rp_read_flow_control_mode)),
HCI_CC(HCI_OP_READ_LOCAL_AMP_INFO, hci_cc_read_local_amp_info,
sizeof(struct hci_rp_read_local_amp_info)),
HCI_CC(HCI_OP_READ_CLOCK, hci_cc_read_clock,
sizeof(struct hci_rp_read_clock)),
HCI_CC(HCI_OP_READ_ENC_KEY_SIZE, hci_cc_read_enc_key_size,
@ -4308,7 +4249,7 @@ static void hci_cs_le_create_cis(struct hci_dev *hdev, u8 status)
hci_dev_lock(hdev);
/* Remove connection if command failed */
for (i = 0; cp->num_cis; cp->num_cis--, i++) {
for (i = 0; i < cp->num_cis; i++) {
struct hci_conn *conn;
u16 handle;
@ -4324,6 +4265,7 @@ static void hci_cs_le_create_cis(struct hci_dev *hdev, u8 status)
hci_conn_del(conn);
}
}
cp->num_cis = 0;
if (pending)
hci_le_create_cis_pending(hdev);
@ -4452,11 +4394,6 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
flex_array_size(ev, handles, ev->num)))
return;
if (hdev->flow_ctl_mode != HCI_FLOW_CTL_MODE_PACKET_BASED) {
bt_dev_err(hdev, "wrong event for mode %d", hdev->flow_ctl_mode);
return;
}
bt_dev_dbg(hdev, "num %d", ev->num);
for (i = 0; i < ev->num; i++) {
@ -4524,78 +4461,6 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
queue_work(hdev->workqueue, &hdev->tx_work);
}
static struct hci_conn *__hci_conn_lookup_handle(struct hci_dev *hdev,
__u16 handle)
{
struct hci_chan *chan;
switch (hdev->dev_type) {
case HCI_PRIMARY:
return hci_conn_hash_lookup_handle(hdev, handle);
case HCI_AMP:
chan = hci_chan_lookup_handle(hdev, handle);
if (chan)
return chan->conn;
break;
default:
bt_dev_err(hdev, "unknown dev_type %d", hdev->dev_type);
break;
}
return NULL;
}
static void hci_num_comp_blocks_evt(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
struct hci_ev_num_comp_blocks *ev = data;
int i;
if (!hci_ev_skb_pull(hdev, skb, HCI_EV_NUM_COMP_BLOCKS,
flex_array_size(ev, handles, ev->num_hndl)))
return;
if (hdev->flow_ctl_mode != HCI_FLOW_CTL_MODE_BLOCK_BASED) {
bt_dev_err(hdev, "wrong event for mode %d",
hdev->flow_ctl_mode);
return;
}
bt_dev_dbg(hdev, "num_blocks %d num_hndl %d", ev->num_blocks,
ev->num_hndl);
for (i = 0; i < ev->num_hndl; i++) {
struct hci_comp_blocks_info *info = &ev->handles[i];
struct hci_conn *conn = NULL;
__u16 handle, block_count;
handle = __le16_to_cpu(info->handle);
block_count = __le16_to_cpu(info->blocks);
conn = __hci_conn_lookup_handle(hdev, handle);
if (!conn)
continue;
conn->sent -= block_count;
switch (conn->type) {
case ACL_LINK:
case AMP_LINK:
hdev->block_cnt += block_count;
if (hdev->block_cnt > hdev->num_blocks)
hdev->block_cnt = hdev->num_blocks;
break;
default:
bt_dev_err(hdev, "unknown type %d conn %p",
conn->type, conn);
break;
}
}
queue_work(hdev->workqueue, &hdev->tx_work);
}
static void hci_mode_change_evt(struct hci_dev *hdev, void *data,
struct sk_buff *skb)
{
@ -5768,8 +5633,8 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
goto unlock;
conn = hci_conn_add_unset(hdev, LE_LINK, bdaddr, role);
if (!conn) {
bt_dev_err(hdev, "no memory for new connection");
if (IS_ERR(conn)) {
bt_dev_err(hdev, "connection err: %ld", PTR_ERR(conn));
goto unlock;
}
@ -6493,14 +6358,16 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
if (!(flags & HCI_PROTO_DEFER))
goto unlock;
/* Add connection to indicate PA sync event */
pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
HCI_ROLE_SLAVE);
if (IS_ERR(pa_sync))
goto unlock;
pa_sync->sync_handle = le16_to_cpu(ev->handle);
if (ev->status) {
/* Add connection to indicate the failed PA sync event */
pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
HCI_ROLE_SLAVE);
if (!pa_sync)
goto unlock;
set_bit(HCI_CONN_PA_SYNC_FAILED, &pa_sync->flags);
/* Notify iso layer */
@ -6517,6 +6384,7 @@ static void hci_le_per_adv_report_evt(struct hci_dev *hdev, void *data,
struct hci_ev_le_per_adv_report *ev = data;
int mask = hdev->link_mode;
__u8 flags = 0;
struct hci_conn *pa_sync;
bt_dev_dbg(hdev, "sync_handle 0x%4.4x", le16_to_cpu(ev->sync_handle));
@ -6524,8 +6392,28 @@ static void hci_le_per_adv_report_evt(struct hci_dev *hdev, void *data,
mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
if (!(mask & HCI_LM_ACCEPT))
hci_le_pa_term_sync(hdev, ev->sync_handle);
goto unlock;
if (!(flags & HCI_PROTO_DEFER))
goto unlock;
pa_sync = hci_conn_hash_lookup_pa_sync_handle
(hdev,
le16_to_cpu(ev->sync_handle));
if (!pa_sync)
goto unlock;
if (ev->data_status == LE_PA_DATA_COMPLETE &&
!test_and_set_bit(HCI_CONN_PA_SYNC, &pa_sync->flags)) {
/* Notify iso layer */
hci_connect_cfm(pa_sync, 0);
/* Notify MGMT layer */
mgmt_device_connected(hdev, pa_sync, NULL, 0);
}
unlock:
hci_dev_unlock(hdev);
}
@ -6898,7 +6786,7 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
if (!cis) {
cis = hci_conn_add(hdev, ISO_LINK, &acl->dst, HCI_ROLE_SLAVE,
cis_handle);
if (!cis) {
if (IS_ERR(cis)) {
hci_le_reject_cis(hdev, ev->cis_handle);
goto unlock;
}
@ -7007,7 +6895,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
if (!bis) {
bis = hci_conn_add(hdev, ISO_LINK, BDADDR_ANY,
HCI_ROLE_SLAVE, handle);
if (!bis)
if (IS_ERR(bis))
continue;
}
@ -7060,10 +6948,8 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
hci_dev_lock(hdev);
mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
if (!(mask & HCI_LM_ACCEPT)) {
hci_le_pa_term_sync(hdev, ev->sync_handle);
if (!(mask & HCI_LM_ACCEPT))
goto unlock;
}
if (!(flags & HCI_PROTO_DEFER))
goto unlock;
@ -7072,24 +6958,11 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
(hdev,
le16_to_cpu(ev->sync_handle));
if (pa_sync)
goto unlock;
/* Add connection to indicate the PA sync event */
pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
HCI_ROLE_SLAVE);
if (!pa_sync)
goto unlock;
pa_sync->sync_handle = le16_to_cpu(ev->sync_handle);
set_bit(HCI_CONN_PA_SYNC, &pa_sync->flags);
/* Notify iso layer */
hci_connect_cfm(pa_sync, 0x00);
/* Notify MGMT layer */
mgmt_device_connected(hdev, pa_sync, NULL, 0);
hci_connect_cfm(pa_sync, 0);
unlock:
hci_dev_unlock(hdev);
@ -7503,9 +7376,6 @@ static const struct hci_ev {
/* [0x3e = HCI_EV_LE_META] */
HCI_EV_REQ_VL(HCI_EV_LE_META, hci_le_meta_evt,
sizeof(struct hci_ev_le_meta), HCI_MAX_EVENT_SIZE),
/* [0x48 = HCI_EV_NUM_COMP_BLOCKS] */
HCI_EV(HCI_EV_NUM_COMP_BLOCKS, hci_num_comp_blocks_evt,
sizeof(struct hci_ev_num_comp_blocks)),
/* [0xff = HCI_EV_VENDOR] */
HCI_EV_VL(HCI_EV_VENDOR, msft_vendor_evt, 0, HCI_MAX_EVENT_SIZE),
};


@ -29,10 +29,6 @@
#define hci_req_sync_lock(hdev) mutex_lock(&hdev->req_lock)
#define hci_req_sync_unlock(hdev) mutex_unlock(&hdev->req_lock)
#define HCI_REQ_DONE 0
#define HCI_REQ_PEND 1
#define HCI_REQ_CANCELED 2
struct hci_request {
struct hci_dev *hdev;
struct sk_buff_head cmd_q;


@ -485,7 +485,7 @@ static struct sk_buff *create_monitor_event(struct hci_dev *hdev, int event)
return NULL;
ni = skb_put(skb, HCI_MON_NEW_INDEX_SIZE);
ni->type = hdev->dev_type;
ni->type = 0x00; /* Old hdev->dev_type */
ni->bus = hdev->bus;
bacpy(&ni->bdaddr, &hdev->bdaddr);
memcpy_and_pad(ni->name, sizeof(ni->name), hdev->name,
@ -1007,9 +1007,6 @@ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd,
if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED))
return -EOPNOTSUPP;
if (hdev->dev_type != HCI_PRIMARY)
return -EOPNOTSUPP;
switch (cmd) {
case HCISETRAW:
if (!capable(CAP_NET_ADMIN))


@ -1043,11 +1043,10 @@ static int hci_disable_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
struct hci_cp_ext_adv_set *set;
u8 data[sizeof(*cp) + sizeof(*set) * 1];
u8 size;
struct adv_info *adv = NULL;
/* If request specifies an instance that doesn't exist, fail */
if (instance > 0) {
struct adv_info *adv;
adv = hci_find_adv_instance(hdev, instance);
if (!adv)
return -EINVAL;
@ -1066,7 +1065,7 @@ static int hci_disable_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
cp->num_of_sets = !!instance;
cp->enable = 0x00;
set->handle = instance;
set->handle = adv ? adv->handle : instance;
size = sizeof(*cp) + sizeof(*set) * cp->num_of_sets;
@ -1235,31 +1234,27 @@ int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
static int hci_set_ext_scan_rsp_data_sync(struct hci_dev *hdev, u8 instance)
{
struct {
struct hci_cp_le_set_ext_scan_rsp_data cp;
u8 data[HCI_MAX_EXT_AD_LENGTH];
} pdu;
DEFINE_FLEX(struct hci_cp_le_set_ext_scan_rsp_data, pdu, data, length,
HCI_MAX_EXT_AD_LENGTH);
u8 len;
struct adv_info *adv = NULL;
int err;
memset(&pdu, 0, sizeof(pdu));
if (instance) {
adv = hci_find_adv_instance(hdev, instance);
if (!adv || !adv->scan_rsp_changed)
return 0;
}
len = eir_create_scan_rsp(hdev, instance, pdu.data);
len = eir_create_scan_rsp(hdev, instance, pdu->data);
pdu.cp.handle = instance;
pdu.cp.length = len;
pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
pdu->handle = adv ? adv->handle : instance;
pdu->length = len;
pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;
pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG;
err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA,
sizeof(pdu.cp) + len, &pdu.cp,
struct_size(pdu, data, len), pdu,
HCI_CMD_TIMEOUT);
if (err)
return err;
@ -1267,7 +1262,7 @@ static int hci_set_ext_scan_rsp_data_sync(struct hci_dev *hdev, u8 instance)
if (adv) {
adv->scan_rsp_changed = false;
} else {
memcpy(hdev->scan_rsp_data, pdu.data, len);
memcpy(hdev->scan_rsp_data, pdu->data, len);
hdev->scan_rsp_data_len = len;
}
@ -1335,7 +1330,7 @@ int hci_enable_ext_advertising_sync(struct hci_dev *hdev, u8 instance)
memset(set, 0, sizeof(*set));
set->handle = instance;
set->handle = adv ? adv->handle : instance;
/* Set duration per instance since controller is responsible for
* scheduling it.
@ -1411,29 +1406,25 @@ static int hci_set_per_adv_params_sync(struct hci_dev *hdev, u8 instance,
static int hci_set_per_adv_data_sync(struct hci_dev *hdev, u8 instance)
{
struct {
struct hci_cp_le_set_per_adv_data cp;
u8 data[HCI_MAX_PER_AD_LENGTH];
} pdu;
DEFINE_FLEX(struct hci_cp_le_set_per_adv_data, pdu, data, length,
HCI_MAX_PER_AD_LENGTH);
u8 len;
memset(&pdu, 0, sizeof(pdu));
struct adv_info *adv = NULL;
if (instance) {
struct adv_info *adv = hci_find_adv_instance(hdev, instance);
adv = hci_find_adv_instance(hdev, instance);
if (!adv || !adv->periodic)
return 0;
}
len = eir_create_per_adv_data(hdev, instance, pdu.data);
len = eir_create_per_adv_data(hdev, instance, pdu->data);
pdu.cp.length = len;
pdu.cp.handle = instance;
pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
pdu->length = len;
pdu->handle = adv ? adv->handle : instance;
pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;
return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_PER_ADV_DATA,
sizeof(pdu.cp) + len, &pdu,
struct_size(pdu, data, len), pdu,
HCI_CMD_TIMEOUT);
}
@ -1727,31 +1718,27 @@ int hci_le_terminate_big_sync(struct hci_dev *hdev, u8 handle, u8 reason)
static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
{
struct {
struct hci_cp_le_set_ext_adv_data cp;
u8 data[HCI_MAX_EXT_AD_LENGTH];
} pdu;
DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length,
HCI_MAX_EXT_AD_LENGTH);
u8 len;
struct adv_info *adv = NULL;
int err;
memset(&pdu, 0, sizeof(pdu));
if (instance) {
adv = hci_find_adv_instance(hdev, instance);
if (!adv || !adv->adv_data_changed)
return 0;
}
len = eir_create_adv_data(hdev, instance, pdu.data);
len = eir_create_adv_data(hdev, instance, pdu->data);
pdu.cp.length = len;
pdu.cp.handle = instance;
pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
pdu->length = len;
pdu->handle = adv ? adv->handle : instance;
pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;
pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG;
err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA,
sizeof(pdu.cp) + len, &pdu.cp,
struct_size(pdu, data, len), pdu,
HCI_CMD_TIMEOUT);
if (err)
return err;
@ -1760,7 +1747,7 @@ static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
if (adv) {
adv->adv_data_changed = false;
} else {
memcpy(hdev->adv_data, pdu.data, len);
memcpy(hdev->adv_data, pdu->data, len);
hdev->adv_data_len = len;
}
@ -3523,10 +3510,6 @@ static int hci_unconf_init_sync(struct hci_dev *hdev)
/* Read Local Supported Features. */
static int hci_read_local_features_sync(struct hci_dev *hdev)
{
/* Not all AMP controllers support this command */
if (hdev->dev_type == HCI_AMP && !(hdev->commands[14] & 0x20))
return 0;
return __hci_cmd_sync_status(hdev, HCI_OP_READ_LOCAL_FEATURES,
0, NULL, HCI_CMD_TIMEOUT);
}
@ -3561,51 +3544,6 @@ static int hci_read_local_cmds_sync(struct hci_dev *hdev)
return 0;
}
/* Read Local AMP Info */
static int hci_read_local_amp_info_sync(struct hci_dev *hdev)
{
return __hci_cmd_sync_status(hdev, HCI_OP_READ_LOCAL_AMP_INFO,
0, NULL, HCI_CMD_TIMEOUT);
}
/* Read Data Blk size */
static int hci_read_data_block_size_sync(struct hci_dev *hdev)
{
return __hci_cmd_sync_status(hdev, HCI_OP_READ_DATA_BLOCK_SIZE,
0, NULL, HCI_CMD_TIMEOUT);
}
/* Read Flow Control Mode */
static int hci_read_flow_control_mode_sync(struct hci_dev *hdev)
{
return __hci_cmd_sync_status(hdev, HCI_OP_READ_FLOW_CONTROL_MODE,
0, NULL, HCI_CMD_TIMEOUT);
}
/* Read Location Data */
static int hci_read_location_data_sync(struct hci_dev *hdev)
{
return __hci_cmd_sync_status(hdev, HCI_OP_READ_LOCATION_DATA,
0, NULL, HCI_CMD_TIMEOUT);
}
/* AMP Controller init stage 1 command sequence */
static const struct hci_init_stage amp_init1[] = {
/* HCI_OP_READ_LOCAL_VERSION */
HCI_INIT(hci_read_local_version_sync),
/* HCI_OP_READ_LOCAL_COMMANDS */
HCI_INIT(hci_read_local_cmds_sync),
/* HCI_OP_READ_LOCAL_AMP_INFO */
HCI_INIT(hci_read_local_amp_info_sync),
/* HCI_OP_READ_DATA_BLOCK_SIZE */
HCI_INIT(hci_read_data_block_size_sync),
/* HCI_OP_READ_FLOW_CONTROL_MODE */
HCI_INIT(hci_read_flow_control_mode_sync),
/* HCI_OP_READ_LOCATION_DATA */
HCI_INIT(hci_read_location_data_sync),
{}
};
static int hci_init1_sync(struct hci_dev *hdev)
{
int err;
@ -3619,28 +3557,9 @@ static int hci_init1_sync(struct hci_dev *hdev)
return err;
}
switch (hdev->dev_type) {
case HCI_PRIMARY:
hdev->flow_ctl_mode = HCI_FLOW_CTL_MODE_PACKET_BASED;
return hci_init_stage_sync(hdev, br_init1);
case HCI_AMP:
hdev->flow_ctl_mode = HCI_FLOW_CTL_MODE_BLOCK_BASED;
return hci_init_stage_sync(hdev, amp_init1);
default:
bt_dev_err(hdev, "Unknown device type %d", hdev->dev_type);
break;
}
return 0;
return hci_init_stage_sync(hdev, br_init1);
}
/* AMP Controller init stage 2 command sequence */
static const struct hci_init_stage amp_init2[] = {
/* HCI_OP_READ_LOCAL_FEATURES */
HCI_INIT(hci_read_local_features_sync),
{}
};
/* Read Buffer Size (ACL mtu, max pkt, etc.) */
static int hci_read_buffer_size_sync(struct hci_dev *hdev)
{
@ -3898,9 +3817,6 @@ static int hci_init2_sync(struct hci_dev *hdev)
bt_dev_dbg(hdev, "");
if (hdev->dev_type == HCI_AMP)
return hci_init_stage_sync(hdev, amp_init2);
err = hci_init_stage_sync(hdev, hci_init2);
if (err)
return err;
@ -4728,13 +4644,6 @@ static int hci_init_sync(struct hci_dev *hdev)
if (err < 0)
return err;
/* HCI_PRIMARY covers both single-mode LE, BR/EDR and dual-mode
* BR/EDR/LE type controllers. AMP controllers only need the
* first two stages of init.
*/
if (hdev->dev_type != HCI_PRIMARY)
return 0;
err = hci_init3_sync(hdev);
if (err < 0)
return err;
@ -4963,12 +4872,8 @@ int hci_dev_open_sync(struct hci_dev *hdev)
* In case of user channel usage, it is not important
* if a public address or static random address is
* available.
*
* This check is only valid for BR/EDR controllers
* since AMP controllers do not have an address.
*/
if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
hdev->dev_type == HCI_PRIMARY &&
!bacmp(&hdev->bdaddr, BDADDR_ANY) &&
!bacmp(&hdev->static_addr, BDADDR_ANY)) {
ret = -EADDRNOTAVAIL;
@ -5003,8 +4908,7 @@ int hci_dev_open_sync(struct hci_dev *hdev)
!hci_dev_test_flag(hdev, HCI_CONFIG) &&
!hci_dev_test_flag(hdev, HCI_UNCONFIGURED) &&
!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
hci_dev_test_flag(hdev, HCI_MGMT) &&
hdev->dev_type == HCI_PRIMARY) {
hci_dev_test_flag(hdev, HCI_MGMT)) {
ret = hci_powered_update_sync(hdev);
mgmt_power_on(hdev, ret);
}
@ -5149,8 +5053,7 @@ int hci_dev_close_sync(struct hci_dev *hdev)
auto_off = hci_dev_test_and_clear_flag(hdev, HCI_AUTO_OFF);
if (!auto_off && hdev->dev_type == HCI_PRIMARY &&
!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
if (!auto_off && !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
hci_dev_test_flag(hdev, HCI_MGMT))
__mgmt_power_off(hdev);
@ -5212,9 +5115,6 @@ int hci_dev_close_sync(struct hci_dev *hdev)
hdev->flags &= BIT(HCI_RAW);
hci_dev_clear_volatile_flags(hdev);
/* Controller radio is available but is currently powered down */
hdev->amp_status = AMP_STATUS_POWERED_DOWN;
memset(hdev->eir, 0, sizeof(hdev->eir));
memset(hdev->dev_class, 0, sizeof(hdev->dev_class));
bacpy(&hdev->random_addr, BDADDR_ANY);
@ -5251,8 +5151,7 @@ static int hci_power_on_sync(struct hci_dev *hdev)
*/
if (hci_dev_test_flag(hdev, HCI_RFKILLED) ||
hci_dev_test_flag(hdev, HCI_UNCONFIGURED) ||
(hdev->dev_type == HCI_PRIMARY &&
!bacmp(&hdev->bdaddr, BDADDR_ANY) &&
(!bacmp(&hdev->bdaddr, BDADDR_ANY) &&
!bacmp(&hdev->static_addr, BDADDR_ANY))) {
hci_dev_clear_flag(hdev, HCI_AUTO_OFF);
hci_dev_close_sync(hdev);
@ -5354,27 +5253,11 @@ int hci_stop_discovery_sync(struct hci_dev *hdev)
return 0;
}
static int hci_disconnect_phy_link_sync(struct hci_dev *hdev, u16 handle,
u8 reason)
{
struct hci_cp_disconn_phy_link cp;
memset(&cp, 0, sizeof(cp));
cp.phy_handle = HCI_PHY_HANDLE(handle);
cp.reason = reason;
return __hci_cmd_sync_status(hdev, HCI_OP_DISCONN_PHY_LINK,
sizeof(cp), &cp, HCI_CMD_TIMEOUT);
}
static int hci_disconnect_sync(struct hci_dev *hdev, struct hci_conn *conn,
u8 reason)
{
struct hci_cp_disconnect cp;
if (conn->type == AMP_LINK)
return hci_disconnect_phy_link_sync(hdev, conn->handle, reason);
if (test_bit(HCI_CONN_BIG_CREATED, &conn->flags)) {
/* This is a BIS connection, hci_conn_del will
* do the necessary cleanup.
@ -6493,10 +6376,8 @@ done:
int hci_le_create_cis_sync(struct hci_dev *hdev)
{
struct {
struct hci_cp_le_create_cis cp;
struct hci_cis cis[0x1f];
} cmd;
DEFINE_FLEX(struct hci_cp_le_create_cis, cmd, cis, num_cis, 0x1f);
size_t aux_num_cis = 0;
struct hci_conn *conn;
u8 cig = BT_ISO_QOS_CIG_UNSET;
@ -6523,8 +6404,6 @@ int hci_le_create_cis_sync(struct hci_dev *hdev)
* remains pending.
*/
memset(&cmd, 0, sizeof(cmd));
hci_dev_lock(hdev);
rcu_read_lock();
@ -6561,7 +6440,7 @@ int hci_le_create_cis_sync(struct hci_dev *hdev)
goto done;
list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
struct hci_cis *cis = &cmd.cis[cmd.cp.num_cis];
struct hci_cis *cis = &cmd->cis[aux_num_cis];
if (hci_conn_check_create_cis(conn) ||
conn->iso_qos.ucast.cig != cig)
@ -6570,25 +6449,25 @@ int hci_le_create_cis_sync(struct hci_dev *hdev)
set_bit(HCI_CONN_CREATE_CIS, &conn->flags);
cis->acl_handle = cpu_to_le16(conn->parent->handle);
cis->cis_handle = cpu_to_le16(conn->handle);
cmd.cp.num_cis++;
aux_num_cis++;
if (cmd.cp.num_cis >= ARRAY_SIZE(cmd.cis))
if (aux_num_cis >= cmd->num_cis)
break;
}
cmd->num_cis = aux_num_cis;
done:
rcu_read_unlock();
hci_dev_unlock(hdev);
if (!cmd.cp.num_cis)
if (!aux_num_cis)
return 0;
/* Wait for HCI_LE_CIS_Established */
return __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_CREATE_CIS,
sizeof(cmd.cp) + sizeof(cmd.cis[0]) *
cmd.cp.num_cis, &cmd,
HCI_EVT_LE_CIS_ESTABLISHED,
struct_size(cmd, cis, cmd->num_cis),
cmd, HCI_EVT_LE_CIS_ESTABLISHED,
conn->conn_timeout, NULL);
}


@ -54,7 +54,6 @@ static void iso_sock_kill(struct sock *sk);
enum {
BT_SK_BIG_SYNC,
BT_SK_PA_SYNC,
BT_SK_PA_SYNC_TERM,
};
struct iso_pinfo {
@ -81,12 +80,14 @@ static bool check_ucast_qos(struct bt_iso_qos *qos);
static bool check_bcast_qos(struct bt_iso_qos *qos);
static bool iso_match_sid(struct sock *sk, void *data);
static bool iso_match_sync_handle(struct sock *sk, void *data);
static bool iso_match_sync_handle_pa_report(struct sock *sk, void *data);
static void iso_sock_disconn(struct sock *sk);
typedef bool (*iso_sock_match_t)(struct sock *sk, void *data);
static struct sock *iso_get_sock_listen(bdaddr_t *src, bdaddr_t *dst,
iso_sock_match_t match, void *data);
static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
enum bt_sock_state state,
iso_sock_match_t match, void *data);
/* ---- ISO timers ---- */
#define ISO_CONN_TIMEOUT (HZ * 40)
@ -196,21 +197,10 @@ static void iso_chan_del(struct sock *sk, int err)
sock_set_flag(sk, SOCK_ZAPPED);
}
static bool iso_match_conn_sync_handle(struct sock *sk, void *data)
{
struct hci_conn *hcon = data;
if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags))
return false;
return hcon->sync_handle == iso_pi(sk)->sync_handle;
}
static void iso_conn_del(struct hci_conn *hcon, int err)
{
struct iso_conn *conn = hcon->iso_data;
struct sock *sk;
struct sock *parent;
if (!conn)
return;
@ -226,25 +216,6 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
if (sk) {
lock_sock(sk);
/* While a PA sync hcon is in the process of closing,
* mark parent socket with a flag, so that any residual
* BIGInfo adv reports that arrive before PA sync is
* terminated are not processed anymore.
*/
if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_conn_sync_handle,
hcon);
if (parent) {
set_bit(BT_SK_PA_SYNC_TERM,
&iso_pi(parent)->flags);
sock_put(parent);
}
}
iso_sock_clear_timer(sk);
iso_chan_del(sk, err);
release_sock(sk);
@ -581,22 +552,23 @@ static struct sock *__iso_get_sock_listen_by_sid(bdaddr_t *ba, bdaddr_t *bc,
return NULL;
}
/* Find socket listening:
/* Find socket in given state:
* source bdaddr (Unicast)
* destination bdaddr (Broadcast only)
* match func - pass NULL to ignore
* match func data - pass -1 to ignore
* Returns closest match.
*/
static struct sock *iso_get_sock_listen(bdaddr_t *src, bdaddr_t *dst,
iso_sock_match_t match, void *data)
static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
enum bt_sock_state state,
iso_sock_match_t match, void *data)
{
struct sock *sk = NULL, *sk1 = NULL;
read_lock(&iso_sk_list.lock);
sk_for_each(sk, &iso_sk_list.head) {
if (sk->sk_state != BT_LISTEN)
if (sk->sk_state != state)
continue;
/* Match Broadcast destination */
@ -857,6 +829,7 @@ static struct sock *iso_sock_alloc(struct net *net, struct socket *sock,
iso_pi(sk)->src_type = BDADDR_LE_PUBLIC;
iso_pi(sk)->qos = default_qos;
iso_pi(sk)->sync_handle = -1;
bt_sock_link(&iso_sk_list, sk);
return sk;
@ -904,7 +877,6 @@ static int iso_sock_bind_bc(struct socket *sock, struct sockaddr *addr,
return -EINVAL;
iso_pi(sk)->dst_type = sa->iso_bc->bc_bdaddr_type;
iso_pi(sk)->sync_handle = -1;
if (sa->iso_bc->bc_sid > 0x0f)
return -EINVAL;
@ -981,7 +953,8 @@ static int iso_sock_bind(struct socket *sock, struct sockaddr *addr,
/* Allow the user to bind a PA sync socket to a number
* of BISes to sync to.
*/
if (sk->sk_state == BT_CONNECT2 &&
if ((sk->sk_state == BT_CONNECT2 ||
sk->sk_state == BT_CONNECTED) &&
test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
err = iso_sock_bind_pa_sk(sk, sa, addr_len);
goto done;
@ -1285,7 +1258,7 @@ static int iso_sock_sendmsg(struct socket *sock, struct msghdr *msg,
return -ENOTCONN;
}
mtu = iso_pi(sk)->conn->hcon->hdev->iso_mtu;
mtu = iso_pi(sk)->conn->hcon->mtu;
release_sock(sk);
@ -1393,6 +1366,16 @@ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
}
release_sock(sk);
return 0;
case BT_CONNECTED:
if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
iso_conn_big_sync(sk);
sk->sk_state = BT_LISTEN;
release_sock(sk);
return 0;
}
release_sock(sk);
break;
case BT_CONNECT:
release_sock(sk);
return iso_connect_cis(sk);
@ -1538,7 +1521,9 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
case BT_ISO_QOS:
if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND &&
sk->sk_state != BT_CONNECT2) {
sk->sk_state != BT_CONNECT2 &&
(!test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags) ||
sk->sk_state != BT_CONNECTED)) {
err = -EINVAL;
break;
}
@ -1759,7 +1744,7 @@ static void iso_conn_ready(struct iso_conn *conn)
struct sock *sk = conn->sk;
struct hci_ev_le_big_sync_estabilished *ev = NULL;
struct hci_ev_le_pa_sync_established *ev2 = NULL;
struct hci_evt_le_big_info_adv_report *ev3 = NULL;
struct hci_ev_le_per_adv_report *ev3 = NULL;
struct hci_conn *hcon;
BT_DBG("conn %p", conn);
@ -1777,32 +1762,37 @@ static void iso_conn_ready(struct iso_conn *conn)
HCI_EVT_LE_BIG_SYNC_ESTABILISHED);
/* Get reference to PA sync parent socket, if it exists */
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_pa_sync_flag, NULL);
parent = iso_get_sock(&hcon->src, &hcon->dst,
BT_LISTEN,
iso_match_pa_sync_flag,
NULL);
if (!parent && ev)
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_big, ev);
parent = iso_get_sock(&hcon->src,
&hcon->dst,
BT_LISTEN,
iso_match_big, ev);
} else if (test_bit(HCI_CONN_PA_SYNC_FAILED, &hcon->flags)) {
ev2 = hci_recv_event_data(hcon->hdev,
HCI_EV_LE_PA_SYNC_ESTABLISHED);
if (ev2)
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_sid, ev2);
parent = iso_get_sock(&hcon->src,
&hcon->dst,
BT_LISTEN,
iso_match_sid, ev2);
} else if (test_bit(HCI_CONN_PA_SYNC, &hcon->flags)) {
ev3 = hci_recv_event_data(hcon->hdev,
HCI_EVT_LE_BIG_INFO_ADV_REPORT);
HCI_EV_LE_PER_ADV_REPORT);
if (ev3)
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_sync_handle, ev3);
parent = iso_get_sock(&hcon->src,
&hcon->dst,
BT_LISTEN,
iso_match_sync_handle_pa_report,
ev3);
}
if (!parent)
parent = iso_get_sock_listen(&hcon->src,
BDADDR_ANY, NULL, NULL);
parent = iso_get_sock(&hcon->src, BDADDR_ANY,
BT_LISTEN, NULL, NULL);
if (!parent)
return;
@ -1839,7 +1829,6 @@ static void iso_conn_ready(struct iso_conn *conn)
if (ev3) {
iso_pi(sk)->qos = iso_pi(parent)->qos;
iso_pi(sk)->qos.bcast.encryption = ev3->encryption;
hcon->iso_qos = iso_pi(sk)->qos;
iso_pi(sk)->bc_num_bis = iso_pi(parent)->bc_num_bis;
memcpy(iso_pi(sk)->bc_bis, iso_pi(parent)->bc_bis, ISO_MAX_NUM_BIS);
@ -1923,8 +1912,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
*/
ev1 = hci_recv_event_data(hdev, HCI_EV_LE_PA_SYNC_ESTABLISHED);
if (ev1) {
sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr, iso_match_sid,
ev1);
sk = iso_get_sock(&hdev->bdaddr, bdaddr, BT_LISTEN,
iso_match_sid, ev1);
if (sk && !ev1->status)
iso_pi(sk)->sync_handle = le16_to_cpu(ev1->handle);
@ -1933,26 +1922,29 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
ev2 = hci_recv_event_data(hdev, HCI_EVT_LE_BIG_INFO_ADV_REPORT);
if (ev2) {
/* Try to get PA sync listening socket, if it exists */
sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr,
iso_match_pa_sync_flag, NULL);
if (!sk) {
sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr,
iso_match_sync_handle, ev2);
/* If PA Sync is in process of terminating,
* do not handle any more BIGInfo adv reports.
*/
if (sk && test_bit(BT_SK_PA_SYNC_TERM,
&iso_pi(sk)->flags))
return 0;
/* Check if BIGInfo report has already been handled */
sk = iso_get_sock(&hdev->bdaddr, bdaddr, BT_CONNECTED,
iso_match_sync_handle, ev2);
if (sk) {
sock_put(sk);
sk = NULL;
goto done;
}
/* Try to get PA sync socket, if it exists */
sk = iso_get_sock(&hdev->bdaddr, bdaddr, BT_CONNECT2,
iso_match_sync_handle, ev2);
if (!sk)
sk = iso_get_sock(&hdev->bdaddr, bdaddr,
BT_LISTEN,
iso_match_sync_handle,
ev2);
if (sk) {
int err;
iso_pi(sk)->qos.bcast.encryption = ev2->encryption;
if (ev2->num_bis < iso_pi(sk)->bc_num_bis)
iso_pi(sk)->bc_num_bis = ev2->num_bis;
@ -1971,6 +1963,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
}
}
}
goto done;
}
ev3 = hci_recv_event_data(hdev, HCI_EV_LE_PER_ADV_REPORT);
@ -1979,8 +1973,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
u8 *base;
struct hci_conn *hcon;
sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr,
iso_match_sync_handle_pa_report, ev3);
sk = iso_get_sock(&hdev->bdaddr, bdaddr, BT_LISTEN,
iso_match_sync_handle_pa_report, ev3);
if (!sk)
goto done;
@ -2029,7 +2023,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
hcon->le_per_adv_data_len = 0;
}
} else {
sk = iso_get_sock_listen(&hdev->bdaddr, BDADDR_ANY, NULL, NULL);
sk = iso_get_sock(&hdev->bdaddr, BDADDR_ANY,
BT_LISTEN, NULL, NULL);
}
done:


@ -457,6 +457,9 @@ struct l2cap_chan *l2cap_chan_create(void)
/* Set default lock nesting level */
atomic_set(&chan->nesting, L2CAP_NESTING_NORMAL);
/* Available receive buffer space is initially unknown */
chan->rx_avail = -1;
write_lock(&chan_list_lock);
list_add(&chan->global_l, &chan_list);
write_unlock(&chan_list_lock);
@ -538,6 +541,28 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan)
}
EXPORT_SYMBOL_GPL(l2cap_chan_set_defaults);
static __u16 l2cap_le_rx_credits(struct l2cap_chan *chan)
{
size_t sdu_len = chan->sdu ? chan->sdu->len : 0;
if (chan->mps == 0)
return 0;
/* If we don't know the available space in the receiver buffer, give
* enough credits for a full packet.
*/
if (chan->rx_avail == -1)
return (chan->imtu / chan->mps) + 1;
/* If we know how much space is available in the receive buffer, give
* out as many credits as would fill the buffer.
*/
if (chan->rx_avail <= sdu_len)
return 0;
return DIV_ROUND_UP(chan->rx_avail - sdu_len, chan->mps);
}
static void l2cap_le_flowctl_init(struct l2cap_chan *chan, u16 tx_credits)
{
chan->sdu = NULL;
@ -546,8 +571,7 @@ static void l2cap_le_flowctl_init(struct l2cap_chan *chan, u16 tx_credits)
chan->tx_credits = tx_credits;
/* Derive MPS from connection MTU to stop HCI fragmentation */
chan->mps = min_t(u16, chan->imtu, chan->conn->mtu - L2CAP_HDR_SIZE);
/* Give enough credits for a full packet */
chan->rx_credits = (chan->imtu / chan->mps) + 1;
chan->rx_credits = l2cap_le_rx_credits(chan);
skb_queue_head_init(&chan->tx_q);
}
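For reference, a standalone sketch of the new credit arithmetic shown in l2cap_le_rx_credits() above (not part of the patch; the numbers are invented): once the receive-buffer space is known, credits track the free space instead of the old (imtu / mps) + 1 heuristic.

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Mirrors l2cap_le_rx_credits() above; rx_avail == -1 means "unknown". */
static unsigned int le_rx_credits(long rx_avail, unsigned int imtu,
				  unsigned int mps, unsigned int sdu_len)
{
	if (mps == 0)
		return 0;
	if (rx_avail == -1)
		return imtu / mps + 1;		/* enough for one full SDU */
	if (rx_avail <= (long)sdu_len)
		return 0;
	return DIV_ROUND_UP(rx_avail - sdu_len, mps);
}

int main(void)
{
	/* Unknown buffer space, imtu 672, MPS 247: (672 / 247) + 1 = 3 */
	printf("unknown rx_avail -> %u credits\n", le_rx_credits(-1, 672, 247, 0));
	/* 2048 bytes free, no partial SDU: DIV_ROUND_UP(2048, 247) = 9 */
	printf("2048 bytes free  -> %u credits\n", le_rx_credits(2048, 672, 247, 0));
	return 0;
}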
@ -559,7 +583,7 @@ static void l2cap_ecred_init(struct l2cap_chan *chan, u16 tx_credits)
/* L2CAP implementations shall support a minimum MPS of 64 octets */
if (chan->mps < L2CAP_ECRED_MIN_MPS) {
chan->mps = L2CAP_ECRED_MIN_MPS;
chan->rx_credits = (chan->imtu / chan->mps) + 1;
chan->rx_credits = l2cap_le_rx_credits(chan);
}
}
@ -1260,7 +1284,7 @@ static void l2cap_le_connect(struct l2cap_chan *chan)
struct l2cap_ecred_conn_data {
struct {
struct l2cap_ecred_conn_req req;
struct l2cap_ecred_conn_req_hdr req;
__le16 scid[5];
} __packed pdu;
struct l2cap_chan *chan;
@ -3740,7 +3764,7 @@ static void l2cap_ecred_list_defer(struct l2cap_chan *chan, void *data)
struct l2cap_ecred_rsp_data {
struct {
struct l2cap_ecred_conn_rsp rsp;
struct l2cap_ecred_conn_rsp_hdr rsp;
__le16 scid[L2CAP_ECRED_MAX_CID];
} __packed pdu;
int count;
@ -3749,6 +3773,8 @@ struct l2cap_ecred_rsp_data {
static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
{
struct l2cap_ecred_rsp_data *rsp = data;
struct l2cap_ecred_conn_rsp *rsp_flex =
container_of(&rsp->pdu.rsp, struct l2cap_ecred_conn_rsp, hdr);
if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
return;
@ -3758,7 +3784,7 @@ static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
/* Include all channels pending with the same ident */
if (!rsp->pdu.rsp.result)
rsp->pdu.rsp.dcid[rsp->count++] = cpu_to_le16(chan->scid);
rsp_flex->dcid[rsp->count++] = cpu_to_le16(chan->scid);
else
l2cap_chan_del(chan, ECONNRESET);
}
@ -3906,7 +3932,7 @@ static inline int l2cap_command_rej(struct l2cap_conn *conn,
}
static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
u8 *data, u8 rsp_code, u8 amp_id)
u8 *data, u8 rsp_code)
{
struct l2cap_conn_req *req = (struct l2cap_conn_req *) data;
struct l2cap_conn_rsp rsp;
@ -3985,17 +4011,8 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
status = L2CAP_CS_AUTHOR_PEND;
chan->ops->defer(chan);
} else {
/* Force pending result for AMP controllers.
* The connection will succeed after the
* physical link is up.
*/
if (amp_id == AMP_ID_BREDR) {
l2cap_state_change(chan, BT_CONFIG);
result = L2CAP_CR_SUCCESS;
} else {
l2cap_state_change(chan, BT_CONNECT2);
result = L2CAP_CR_PEND;
}
l2cap_state_change(chan, BT_CONNECT2);
result = L2CAP_CR_PEND;
status = L2CAP_CS_NO_INFO;
}
} else {
@ -4060,7 +4077,7 @@ static int l2cap_connect_req(struct l2cap_conn *conn,
mgmt_device_connected(hdev, hcon, NULL, 0);
hci_dev_unlock(hdev);
l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP, 0);
l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP);
return 0;
}
@ -4996,10 +5013,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
u8 *data)
{
struct l2cap_ecred_conn_req *req = (void *) data;
struct {
struct l2cap_ecred_conn_rsp rsp;
__le16 dcid[L2CAP_ECRED_MAX_CID];
} __packed pdu;
DEFINE_RAW_FLEX(struct l2cap_ecred_conn_rsp, pdu, dcid, L2CAP_ECRED_MAX_CID);
struct l2cap_chan *chan, *pchan;
u16 mtu, mps;
__le16 psm;
@ -5018,7 +5032,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
cmd_len -= sizeof(*req);
num_scid = cmd_len / sizeof(u16);
if (num_scid > ARRAY_SIZE(pdu.dcid)) {
if (num_scid > L2CAP_ECRED_MAX_CID) {
result = L2CAP_CR_LE_INVALID_PARAMS;
goto response;
}
@ -5047,7 +5061,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);
memset(&pdu, 0, sizeof(pdu));
memset(pdu, 0, sizeof(*pdu));
/* Check if we have socket listening on psm */
pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
@ -5073,8 +5087,8 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
BT_DBG("scid[%d] 0x%4.4x", i, scid);
pdu.dcid[i] = 0x0000;
len += sizeof(*pdu.dcid);
pdu->dcid[i] = 0x0000;
len += sizeof(*pdu->dcid);
/* Check for valid dynamic CID range */
if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {
@ -5108,13 +5122,13 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
l2cap_ecred_init(chan, __le16_to_cpu(req->credits));
/* Init response */
if (!pdu.rsp.credits) {
pdu.rsp.mtu = cpu_to_le16(chan->imtu);
pdu.rsp.mps = cpu_to_le16(chan->mps);
pdu.rsp.credits = cpu_to_le16(chan->rx_credits);
if (!pdu->credits) {
pdu->mtu = cpu_to_le16(chan->imtu);
pdu->mps = cpu_to_le16(chan->mps);
pdu->credits = cpu_to_le16(chan->rx_credits);
}
pdu.dcid[i] = cpu_to_le16(chan->scid);
pdu->dcid[i] = cpu_to_le16(chan->scid);
__set_chan_timer(chan, chan->ops->get_sndtimeo(chan));
@ -5136,13 +5150,13 @@ unlock:
l2cap_chan_put(pchan);
response:
pdu.rsp.result = cpu_to_le16(result);
pdu->result = cpu_to_le16(result);
if (defer)
return 0;
l2cap_send_cmd(conn, cmd->ident, L2CAP_ECRED_CONN_RSP,
sizeof(pdu.rsp) + len, &pdu);
sizeof(*pdu) + len, pdu);
return 0;
}
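The on-stack pdu buffers above now come from DEFINE_RAW_FLEX(), the kernel helper in include/linux/overflow.h that reserves stack space for a struct plus a fixed number of flexible-array elements and exposes a typed pointer to it. A rough userspace imitation of the idea (not the kernel macro itself; the struct and names below are stand-ins, and the union trick relies on the same GCC/Clang behaviour the kernel uses):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for struct l2cap_ecred_conn_rsp: fixed header + flexible array. */
struct conn_rsp {
	uint16_t mtu, mps, credits, result;
	uint16_t dcid[];
};

/* Simplified mock of DEFINE_RAW_FLEX(type, name, member, count). */
#define MOCK_DEFINE_RAW_FLEX(type, name, member, count)			\
	union {								\
		unsigned char bytes[sizeof(type) +			\
				    (count) * sizeof(((type *)0)->member[0])]; \
		type obj;						\
	} name##_storage = { { 0 } };					\
	type *name = &name##_storage.obj

int main(void)
{
	MOCK_DEFINE_RAW_FLEX(struct conn_rsp, pdu, dcid, 5);

	pdu->mtu = 672;
	pdu->dcid[0] = 0x0040;
	/* sizeof(*pdu) is only the fixed header, which is how the hunks above
	 * compute the length of the PDU to send (sizeof(*pdu) + len). */
	printf("header %zu bytes, storage %zu bytes\n",
	       sizeof(*pdu), sizeof(pdu_storage));
	return 0;
}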
@ -6241,7 +6255,7 @@ static int l2cap_finish_move(struct l2cap_chan *chan)
BT_DBG("chan %p", chan);
chan->rx_state = L2CAP_RX_STATE_RECV;
chan->conn->mtu = chan->conn->hcon->hdev->acl_mtu;
chan->conn->mtu = chan->conn->hcon->mtu;
return l2cap_resegment(chan);
}
@ -6308,7 +6322,7 @@ static int l2cap_rx_state_wait_f(struct l2cap_chan *chan,
*/
chan->next_tx_seq = control->reqseq;
chan->unacked_frames = 0;
chan->conn->mtu = chan->conn->hcon->hdev->acl_mtu;
chan->conn->mtu = chan->conn->hcon->mtu;
err = l2cap_resegment(chan);
@ -6513,9 +6527,7 @@ static void l2cap_chan_le_send_credits(struct l2cap_chan *chan)
{
struct l2cap_conn *conn = chan->conn;
struct l2cap_le_credits pkt;
u16 return_credits;
return_credits = (chan->imtu / chan->mps) + 1;
u16 return_credits = l2cap_le_rx_credits(chan);
if (chan->rx_credits >= return_credits)
return;
@ -6534,6 +6546,19 @@ static void l2cap_chan_le_send_credits(struct l2cap_chan *chan)
l2cap_send_cmd(conn, chan->ident, L2CAP_LE_CREDITS, sizeof(pkt), &pkt);
}
void l2cap_chan_rx_avail(struct l2cap_chan *chan, ssize_t rx_avail)
{
if (chan->rx_avail == rx_avail)
return;
BT_DBG("chan %p has %zd bytes avail for rx", chan, rx_avail);
chan->rx_avail = rx_avail;
if (chan->state == BT_CONNECTED)
l2cap_chan_le_send_credits(chan);
}

static int l2cap_ecred_recv(struct l2cap_chan *chan, struct sk_buff *skb)
{
int err;
@ -6543,6 +6568,12 @@ static int l2cap_ecred_recv(struct l2cap_chan *chan, struct sk_buff *skb)
/* Wait recv to confirm reception before updating the credits */
err = chan->ops->recv(chan, skb);
if (err < 0 && chan->rx_avail != -1) {
BT_ERR("Queueing received LE L2CAP data failed");
l2cap_send_disconn_req(chan, ECONNRESET);
return err;
}
/* Update credits whenever an SDU is received */
l2cap_chan_le_send_credits(chan);
@ -6565,7 +6596,8 @@ static int l2cap_ecred_data_rcv(struct l2cap_chan *chan, struct sk_buff *skb)
}
chan->rx_credits--;
BT_DBG("rx_credits %u -> %u", chan->rx_credits + 1, chan->rx_credits);
BT_DBG("chan %p: rx_credits %u -> %u",
chan, chan->rx_credits + 1, chan->rx_credits);
/* Update if remote had run out of credits; this should only happen
* if the remote is not using the entire MPS.
@ -6848,18 +6880,7 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)
BT_DBG("hcon %p conn %p hchan %p", hcon, conn, hchan);
switch (hcon->type) {
case LE_LINK:
if (hcon->hdev->le_mtu) {
conn->mtu = hcon->hdev->le_mtu;
break;
}
fallthrough;
default:
conn->mtu = hcon->hdev->acl_mtu;
break;
}
conn->mtu = hcon->mtu;
conn->feat_mask = 0;
conn->local_fixed_chan = L2CAP_FC_SIG_BREDR | L2CAP_FC_CONNLESS;
@ -7113,14 +7134,11 @@ EXPORT_SYMBOL_GPL(l2cap_chan_connect);
static void l2cap_ecred_reconfigure(struct l2cap_chan *chan)
{
struct l2cap_conn *conn = chan->conn;
struct {
struct l2cap_ecred_reconf_req req;
__le16 scid;
} pdu;
DEFINE_RAW_FLEX(struct l2cap_ecred_reconf_req, pdu, scid, 1);
pdu.req.mtu = cpu_to_le16(chan->imtu);
pdu.req.mps = cpu_to_le16(chan->mps);
pdu.scid = cpu_to_le16(chan->scid);
pdu->mtu = cpu_to_le16(chan->imtu);
pdu->mps = cpu_to_le16(chan->mps);
pdu->scid[0] = cpu_to_le16(chan->scid);
chan->ident = l2cap_get_ident(conn);
@ -7464,10 +7482,6 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
struct l2cap_conn *conn = hcon->l2cap_data;
int len;
/* For AMP controller do not create l2cap conn */
if (!conn && hcon->hdev->dev_type != HCI_PRIMARY)
goto drop;
if (!conn)
conn = l2cap_conn_add(hcon);

View File

@ -1131,6 +1131,34 @@ static int l2cap_sock_sendmsg(struct socket *sock, struct msghdr *msg,
return err;
}
static void l2cap_publish_rx_avail(struct l2cap_chan *chan)
{
struct sock *sk = chan->data;
ssize_t avail = sk->sk_rcvbuf - atomic_read(&sk->sk_rmem_alloc);
int expected_skbs, skb_overhead;
if (avail <= 0) {
l2cap_chan_rx_avail(chan, 0);
return;
}
if (!chan->mps) {
l2cap_chan_rx_avail(chan, -1);
return;
}
/* Correct available memory by estimated sk_buff overhead.
* This is significant due to small transfer sizes. However, accept
* at least one full packet if receive space is non-zero.
*/
expected_skbs = DIV_ROUND_UP(avail, chan->mps);
skb_overhead = expected_skbs * sizeof(struct sk_buff);
if (skb_overhead < avail)
l2cap_chan_rx_avail(chan, avail - skb_overhead);
else
l2cap_chan_rx_avail(chan, -1);
}
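As a concrete illustration of l2cap_publish_rx_avail() just above (not part of the patch; all numbers are invented, and sizeof(struct sk_buff) is config-dependent, roughly 232 bytes is assumed here):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	long rcvbuf = 212992, rmem_alloc = 16384;	/* socket buffer state */
	long avail = rcvbuf - rmem_alloc;		/* 196608 bytes free */
	long mps = 247;					/* negotiated MPS */
	long skb_size = 232;				/* assumed sizeof(struct sk_buff) */

	long expected_skbs = DIV_ROUND_UP(avail, mps);	/* 796 buffers */
	long skb_overhead = expected_skbs * skb_size;	/* 184672 bytes */

	/* Value handed to l2cap_chan_rx_avail(): 196608 - 184672 = 11936,
	 * i.e. with small packets most of the buffer is reserved for per-skb
	 * bookkeeping, which is why the correction matters. */
	printf("rx_avail = %ld\n", skb_overhead < avail ? avail - skb_overhead : -1);
	return 0;
}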
static int l2cap_sock_recvmsg(struct socket *sock, struct msghdr *msg,
size_t len, int flags)
{
@ -1167,28 +1195,33 @@ static int l2cap_sock_recvmsg(struct socket *sock, struct msghdr *msg,
else
err = bt_sock_recvmsg(sock, msg, len, flags);
if (pi->chan->mode != L2CAP_MODE_ERTM)
if (pi->chan->mode != L2CAP_MODE_ERTM &&
pi->chan->mode != L2CAP_MODE_LE_FLOWCTL &&
pi->chan->mode != L2CAP_MODE_EXT_FLOWCTL)
return err;
/* Attempt to put pending rx data in the socket buffer */
lock_sock(sk);
if (!test_bit(CONN_LOCAL_BUSY, &pi->chan->conn_state))
goto done;
l2cap_publish_rx_avail(pi->chan);
if (pi->rx_busy_skb) {
if (!__sock_queue_rcv_skb(sk, pi->rx_busy_skb))
pi->rx_busy_skb = NULL;
else
/* Attempt to put pending rx data in the socket buffer */
while (!list_empty(&pi->rx_busy)) {
struct l2cap_rx_busy *rx_busy =
list_first_entry(&pi->rx_busy,
struct l2cap_rx_busy,
list);
if (__sock_queue_rcv_skb(sk, rx_busy->skb) < 0)
goto done;
list_del(&rx_busy->list);
kfree(rx_busy);
}
/* Restore data flow when half of the receive buffer is
* available. This avoids resending large numbers of
* frames.
*/
if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf >> 1)
if (test_bit(CONN_LOCAL_BUSY, &pi->chan->conn_state) &&
atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf >> 1)
l2cap_chan_busy(pi->chan, 0);
done:
@ -1449,17 +1482,20 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan)
static int l2cap_sock_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
{
struct sock *sk = chan->data;
struct l2cap_pinfo *pi = l2cap_pi(sk);
int err;
lock_sock(sk);
if (l2cap_pi(sk)->rx_busy_skb) {
if (chan->mode == L2CAP_MODE_ERTM && !list_empty(&pi->rx_busy)) {
err = -ENOMEM;
goto done;
}
if (chan->mode != L2CAP_MODE_ERTM &&
chan->mode != L2CAP_MODE_STREAMING) {
chan->mode != L2CAP_MODE_STREAMING &&
chan->mode != L2CAP_MODE_LE_FLOWCTL &&
chan->mode != L2CAP_MODE_EXT_FLOWCTL) {
/* Even if no filter is attached, we could potentially
* get errors from security modules, etc.
*/
@ -1470,7 +1506,9 @@ static int l2cap_sock_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
err = __sock_queue_rcv_skb(sk, skb);
/* For ERTM, handle one skb that doesn't fit into the recv
l2cap_publish_rx_avail(chan);
/* For ERTM and LE, handle a skb that doesn't fit into the recv
* buffer. This is important to do because the data frames
* have already been acked, so the skb cannot be discarded.
*
@ -1479,8 +1517,18 @@ static int l2cap_sock_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
* acked and reassembled until there is buffer space
* available.
*/
if (err < 0 && chan->mode == L2CAP_MODE_ERTM) {
l2cap_pi(sk)->rx_busy_skb = skb;
if (err < 0 &&
(chan->mode == L2CAP_MODE_ERTM ||
chan->mode == L2CAP_MODE_LE_FLOWCTL ||
chan->mode == L2CAP_MODE_EXT_FLOWCTL)) {
struct l2cap_rx_busy *rx_busy =
kmalloc(sizeof(*rx_busy), GFP_KERNEL);
if (!rx_busy) {
err = -ENOMEM;
goto done;
}
rx_busy->skb = skb;
list_add_tail(&rx_busy->list, &pi->rx_busy);
l2cap_chan_busy(chan, 1);
err = 0;
}
@ -1706,6 +1754,8 @@ static const struct l2cap_ops l2cap_chan_ops = {
static void l2cap_sock_destruct(struct sock *sk)
{
struct l2cap_rx_busy *rx_busy, *next;
BT_DBG("sk %p", sk);
if (l2cap_pi(sk)->chan) {
@ -1713,9 +1763,10 @@ static void l2cap_sock_destruct(struct sock *sk)
l2cap_chan_put(l2cap_pi(sk)->chan);
}
if (l2cap_pi(sk)->rx_busy_skb) {
kfree_skb(l2cap_pi(sk)->rx_busy_skb);
l2cap_pi(sk)->rx_busy_skb = NULL;
list_for_each_entry_safe(rx_busy, next, &l2cap_pi(sk)->rx_busy, list) {
kfree_skb(rx_busy->skb);
list_del(&rx_busy->list);
kfree(rx_busy);
}
skb_queue_purge(&sk->sk_receive_queue);
@ -1799,6 +1850,8 @@ static void l2cap_sock_init(struct sock *sk, struct sock *parent)
chan->data = sk;
chan->ops = &l2cap_chan_ops;
l2cap_publish_rx_avail(chan);
}
static struct proto l2cap_proto = {
@ -1820,6 +1873,8 @@ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
sk->sk_destruct = l2cap_sock_destruct;
sk->sk_sndtimeo = L2CAP_CONN_TIMEOUT;
INIT_LIST_HEAD(&l2cap_pi(sk)->rx_busy);
chan = l2cap_chan_create();
if (!chan) {
sk_free(sk);

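The l2cap_sock.c hunks above turn the single rx_busy_skb pointer into a FIFO of parked buffers that is drained on the next read. A rough userspace analogue of that queue-then-drain pattern (types and limits are invented for illustration; the kernel code uses struct l2cap_rx_busy with list_add_tail()/list_first_entry() as shown in the diff):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pending {			/* analogue of struct l2cap_rx_busy */
	struct pending *next;
	size_t len;
	unsigned char data[];
};

struct rx_state {
	struct pending *head, **tail;	/* FIFO of parked buffers */
	size_t used, limit;		/* stand-ins for sk_rmem_alloc / sk_rcvbuf */
};

/* Receive path: deliver if it fits, otherwise park it; the frame is already
 * acknowledged at this point, so it must not be dropped. */
static void rx_receive(struct rx_state *rx, const void *buf, size_t len)
{
	if (!rx->head && rx->used + len <= rx->limit) {
		rx->used += len;		/* "queued to the socket" */
		return;
	}
	struct pending *p = malloc(sizeof(*p) + len);
	if (!p)
		return;				/* real code signals -ENOMEM */
	memcpy(p->data, buf, len);
	p->len = len;
	p->next = NULL;
	*rx->tail = p;
	rx->tail = &p->next;
}

/* Read path: after the application consumed data, requeue parked buffers in
 * order until one no longer fits. */
static void rx_drain(struct rx_state *rx)
{
	while (rx->head && rx->used + rx->head->len <= rx->limit) {
		struct pending *p = rx->head;
		rx->head = p->next;
		if (!rx->head)
			rx->tail = &rx->head;
		rx->used += p->len;
		free(p);
	}
}

int main(void)
{
	struct rx_state rx = { .tail = &rx.head, .limit = 300 };
	unsigned char frame[200] = { 0 };

	rx_receive(&rx, frame, sizeof(frame));	/* fits, delivered */
	rx_receive(&rx, frame, sizeof(frame));	/* would overflow, parked */
	rx.used = 0;				/* application read everything */
	rx_drain(&rx);				/* parked frame now delivered */
	printf("used=%zu parked=%s\n", rx.used, rx.head ? "yes" : "no");
	return 0;
}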
View File

@ -443,8 +443,7 @@ static int read_index_list(struct sock *sk, struct hci_dev *hdev, void *data,
count = 0;
list_for_each_entry(d, &hci_dev_list, list) {
if (d->dev_type == HCI_PRIMARY &&
!hci_dev_test_flag(d, HCI_UNCONFIGURED))
if (!hci_dev_test_flag(d, HCI_UNCONFIGURED))
count++;
}
@ -468,8 +467,7 @@ static int read_index_list(struct sock *sk, struct hci_dev *hdev, void *data,
if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
continue;
if (d->dev_type == HCI_PRIMARY &&
!hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
if (!hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
rp->index[count++] = cpu_to_le16(d->id);
bt_dev_dbg(hdev, "Added hci%u", d->id);
}
@ -503,8 +501,7 @@ static int read_unconf_index_list(struct sock *sk, struct hci_dev *hdev,
count = 0;
list_for_each_entry(d, &hci_dev_list, list) {
if (d->dev_type == HCI_PRIMARY &&
hci_dev_test_flag(d, HCI_UNCONFIGURED))
if (hci_dev_test_flag(d, HCI_UNCONFIGURED))
count++;
}
@ -528,8 +525,7 @@ static int read_unconf_index_list(struct sock *sk, struct hci_dev *hdev,
if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
continue;
if (d->dev_type == HCI_PRIMARY &&
hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
if (hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
rp->index[count++] = cpu_to_le16(d->id);
bt_dev_dbg(hdev, "Added hci%u", d->id);
}
@ -561,10 +557,8 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
read_lock(&hci_dev_list_lock);
count = 0;
list_for_each_entry(d, &hci_dev_list, list) {
if (d->dev_type == HCI_PRIMARY || d->dev_type == HCI_AMP)
count++;
}
list_for_each_entry(d, &hci_dev_list, list)
count++;
rp = kmalloc(struct_size(rp, entry, count), GFP_ATOMIC);
if (!rp) {
@ -585,16 +579,10 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
continue;
if (d->dev_type == HCI_PRIMARY) {
if (hci_dev_test_flag(d, HCI_UNCONFIGURED))
rp->entry[count].type = 0x01;
else
rp->entry[count].type = 0x00;
} else if (d->dev_type == HCI_AMP) {
rp->entry[count].type = 0x02;
} else {
continue;
}
if (hci_dev_test_flag(d, HCI_UNCONFIGURED))
rp->entry[count].type = 0x01;
else
rp->entry[count].type = 0x00;
rp->entry[count].bus = d->bus;
rp->entry[count++].index = cpu_to_le16(d->id);
@ -9331,23 +9319,14 @@ void mgmt_index_added(struct hci_dev *hdev)
if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
return;
switch (hdev->dev_type) {
case HCI_PRIMARY:
if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
mgmt_index_event(MGMT_EV_UNCONF_INDEX_ADDED, hdev,
NULL, 0, HCI_MGMT_UNCONF_INDEX_EVENTS);
ev.type = 0x01;
} else {
mgmt_index_event(MGMT_EV_INDEX_ADDED, hdev, NULL, 0,
HCI_MGMT_INDEX_EVENTS);
ev.type = 0x00;
}
break;
case HCI_AMP:
ev.type = 0x02;
break;
default:
return;
if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
mgmt_index_event(MGMT_EV_UNCONF_INDEX_ADDED, hdev, NULL, 0,
HCI_MGMT_UNCONF_INDEX_EVENTS);
ev.type = 0x01;
} else {
mgmt_index_event(MGMT_EV_INDEX_ADDED, hdev, NULL, 0,
HCI_MGMT_INDEX_EVENTS);
ev.type = 0x00;
}
ev.bus = hdev->bus;
@ -9364,25 +9343,16 @@ void mgmt_index_removed(struct hci_dev *hdev)
if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
return;
switch (hdev->dev_type) {
case HCI_PRIMARY:
mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev,
NULL, 0, HCI_MGMT_UNCONF_INDEX_EVENTS);
ev.type = 0x01;
} else {
mgmt_index_event(MGMT_EV_INDEX_REMOVED, hdev, NULL, 0,
HCI_MGMT_INDEX_EVENTS);
ev.type = 0x00;
}
break;
case HCI_AMP:
ev.type = 0x02;
break;
default:
return;
if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
HCI_MGMT_UNCONF_INDEX_EVENTS);
ev.type = 0x01;
} else {
mgmt_index_event(MGMT_EV_INDEX_REMOVED, hdev, NULL, 0,
HCI_MGMT_INDEX_EVENTS);
ev.type = 0x00;
}
ev.bus = hdev->bus;

View File

@ -126,7 +126,6 @@ static void sco_sock_clear_timer(struct sock *sk)
/* ---- SCO connections ---- */
static struct sco_conn *sco_conn_add(struct hci_conn *hcon)
{
struct hci_dev *hdev = hcon->hdev;
struct sco_conn *conn = hcon->sco_data;
if (conn) {
@ -144,9 +143,10 @@ static struct sco_conn *sco_conn_add(struct hci_conn *hcon)
hcon->sco_data = conn;
conn->hcon = hcon;
conn->mtu = hcon->mtu;
if (hdev->sco_mtu > 0)
conn->mtu = hdev->sco_mtu;
if (hcon->mtu > 0)
conn->mtu = hcon->mtu;
else
conn->mtu = 60;