pci-v5.12-changes

Merge tag 'pci-v5.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

 "Enumeration:
   - Remove unnecessary locking around _OSC (Bjorn Helgaas)
   - Clarify message about _OSC failure (Bjorn Helgaas)
   - Remove notification of PCIe bandwidth changes (Bjorn Helgaas)
   - Tidy checking of syscall user config accessors (Heiner Kallweit)

  Resource management:
   - Decline to resize resources if boot config must be preserved (Ard Biesheuvel)
   - Fix pci_register_io_range() memory leak (Geert Uytterhoeven)

  Error handling (Keith Busch):
   - Clear error status from the correct device
   - Retain error recovery status so drivers can use it after reset
   - Log the type of Port (Root or Switch Downstream) that we reset
   - Always request a reset for Downstream Ports in frozen state

  Endpoint framework and NTB (Kishon Vijay Abraham I):
   - Make *_get_first_free_bar() take into account 64 bit BAR
   - Add helper API to get the 'next' unreserved BAR
   - Make *_free_bar() return error codes on failure
   - Remove unused pci_epf_match_device()
   - Add support to associate secondary EPC with EPF
   - Add support in configfs to associate two EPCs with EPF
   - Add pci_epc_ops to map MSI IRQ
   - Add pci_epf_ops to expose function-specific attrs
   - Allow user to create sub-directory of 'EPF Device' directory
   - Implement ->msi_map_irq() ops for cadence
   - Configure LM_EP_FUNC_CFG based on epc->function_num_map for cadence
   - Add EP function driver to provide NTB functionality
   - Add support for EPF PCI Non-Transparent Bridge
   - Add specification for PCI NTB function device
   - Add PCI endpoint NTB function user guide
   - Add configfs binding documentation for pci-ntb endpoint function

  Broadcom STB PCIe controller driver:
   - Add support for BCM4908 and external PERST# signal controller (Rafał Miłecki)

  Cadence PCIe controller driver:
   - Retrain Link to work around Gen2 training defect (Nadeem Athani)
   - Fix merge botch in cdns_pcie_host_map_dma_ranges() (Krzysztof Wilczyński)

  Freescale Layerscape PCIe controller driver:
   - Add LX2160A rev2 EP mode support (Hou Zhiqiang)
   - Convert to builtin_platform_driver() (Michael Walle)

  MediaTek PCIe controller driver:
   - Fix OF node reference leak (Krzysztof Wilczyński)

  Microchip PolarFlare PCIe controller driver:
   - Add Microchip PolarFire PCIe controller driver (Daire McNamara)

  Qualcomm PCIe controller driver:
   - Use PHY_REFCLK_USE_PAD only for ipq8064 (Ansuel Smith)
   - Add support for ddrss_sf_tbu clock for sm8250 (Dmitry Baryshkov)

  Renesas R-Car PCIe controller driver:
   - Drop PCIE_RCAR config option (Lad Prabhakar)
   - Always allocate MSI addresses in 32bit space (Marek Vasut)

  Rockchip PCIe controller driver:
   - Add FriendlyARM NanoPi M4B DT binding (Chen-Yu Tsai)
   - Make 'ep-gpios' DT property optional (Chen-Yu Tsai)

  Synopsys DesignWare PCIe controller driver:
   - Work around ECRC configuration hardware defect (Vidya Sagar)
   - Drop support for config space in DT 'ranges' (Rob Herring)
   - Change size to u64 for EP outbound iATU (Shradha Todi)
   - Add upper limit address for outbound iATU (Shradha Todi)
   - Make dw_pcie ops optional (Jisheng Zhang)
   - Remove unnecessary dw_pcie_ops from al driver (Jisheng Zhang)

  Xilinx Versal CPM PCIe controller driver:
   - Fix OF node reference leak (Pan Bian)

  Miscellaneous:
   - Remove tango host controller driver (Arnd Bergmann)
   - Remove IRQ handler & data together (altera-msi, brcmstb, dwc) (Martin Kaiser)
   - Fix xgene-msi race in installing chained IRQ handler (Martin Kaiser)
   - Apply CONFIG_PCI_DEBUG to entire drivers/pci hierarchy (Junhao He)
   - Fix pci-bridge-emul array overruns (Russell King)
   - Remove obsolete uses of WARN_ON(in_interrupt()) (Sebastian Andrzej Siewior)"

* tag 'pci-v5.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (69 commits)
  PCI: qcom: Use PHY_REFCLK_USE_PAD only for ipq8064
  PCI: qcom: Add support for ddrss_sf_tbu clock
  dt-bindings: PCI: qcom: Document ddrss_sf_tbu clock for sm8250
  PCI: al: Remove useless dw_pcie_ops
  PCI: dwc: Don't assume the ops in dw_pcie always exist
  PCI: dwc: Add upper limit address for outbound iATU
  PCI: dwc: Change size to u64 for EP outbound iATU
  PCI: dwc: Drop support for config space in 'ranges'
  PCI: layerscape: Convert to builtin_platform_driver()
  PCI: layerscape: Add LX2160A rev2 EP mode support
  dt-bindings: PCI: layerscape: Add LX2160A rev2 compatible strings
  PCI: dwc: Work around ECRC configuration issue
  PCI/portdrv: Report reset for frozen channel
  PCI/AER: Specify the type of Port that was reset
  PCI/ERR: Retain status from error notification
  PCI/AER: Clear AER status from Root Port when resetting Downstream Port
  PCI/ERR: Clear status of the reporting device
  dt-bindings: arm: rockchip: Add FriendlyARM NanoPi M4B
  PCI: rockchip: Make 'ep-gpios' DT property optional
  Documentation: PCI: Add PCI endpoint NTB function user guide
  ...
commit 5b47b10e8f
Documentation/PCI/endpoint/function/binding/pci-ntb.rst (new file, 38 lines)
@@ -0,0 +1,38 @@

.. SPDX-License-Identifier: GPL-2.0

==========================
PCI NTB Endpoint Function
==========================

1) Create a subdirectory to pci_epf_ntb directory in configfs.

Standard EPF Configurable Fields:

================ ===========================================================
vendorid         should be 0x104c
deviceid         should be 0xb00d for TI's J721E SoC
revid            don't care
progif_code      don't care
subclass_code    should be 0x00
baseclass_code   should be 0x5
cache_line_size  don't care
subsys_vendor_id don't care
subsys_id        don't care
interrupt_pin    don't care
msi_interrupts   don't care
msix_interrupts  don't care
================ ===========================================================

2) Create a subdirectory to directory created in 1

NTB EPF specific configurable fields:

================ ===========================================================
db_count         Number of doorbells; default = 4
mw1              size of memory window1
mw2              size of memory window2
mw3              size of memory window3
mw4              size of memory window4
num_mws          Number of memory windows; max = 4
spad_count       Number of scratchpad registers; default = 64
================ ===========================================================
@@ -11,5 +11,8 @@ PCI Endpoint Framework
    pci-endpoint-cfs
    pci-test-function
    pci-test-howto
+   pci-ntb-function
+   pci-ntb-howto

    function/binding/pci-test
+   function/binding/pci-ntb
@@ -68,6 +68,16 @@ created)
     ... subsys_vendor_id
     ... subsys_id
     ... interrupt_pin
+    ... primary/
+          ... <Symlink EPC Device1>/
+    ... secondary/
+          ... <Symlink EPC Device2>/
+
+If an EPF device has to be associated with 2 EPCs (like in the case of
+Non-transparent bridge), symlink of endpoint controller connected to primary
+interface should be added in 'primary' directory and symlink of endpoint
+controller connected to secondary interface should be added in 'secondary'
+directory.

 EPC Device
 ==========
Documentation/PCI/endpoint/pci-ntb-function.rst (new file, 348 lines)
@@ -0,0 +1,348 @@

.. SPDX-License-Identifier: GPL-2.0

=================
PCI NTB Function
=================

:Author: Kishon Vijay Abraham I <kishon@ti.com>

PCI Non-Transparent Bridges (NTB) allow two host systems to communicate
with each other by exposing each host as a device to the other host.
NTBs typically support the ability to generate interrupts on the remote
machine, expose memory ranges as BARs, and perform DMA. They also support
scratchpads, which are areas of memory within the NTB that are accessible
from both machines.

PCI NTB Function allows two different systems (or hosts) to communicate
with each other by configuring the endpoint instances in such a way that
transactions from one system are routed to the other system.

In the below diagram, PCI NTB function configures the SoC with multiple
PCI Endpoint (EP) instances in such a way that transactions from one EP
controller are routed to the other EP controller. Once PCI NTB function
configures the SoC with multiple EP instances, HOST1 and HOST2 can
communicate with each other using SoC as a bridge.

.. code-block:: text

     +-------------+                                   +-------------+
     |             |                                   |             |
     |    HOST1    |                                   |    HOST2    |
     |             |                                   |             |
     +------^------+                                   +------^------+
            |                                                 |
            |                                                 |
  +---------|-------------------------------------------------|---------+
  |  +------v------+                                   +------v------+  |
  |  |             |                                   |             |  |
  |  |     EP      |                                   |     EP      |  |
  |  | CONTROLLER1 |                                   | CONTROLLER2 |  |
  |  |             | <-------------------------------> |             |  |
  |  |             |                                   |             |  |
  |  |             |                                   |             |  |
  |  |             |   SoC With Multiple EP Instances  |             |  |
  |  |             |   (Configured using NTB Function) |             |  |
  |  +-------------+                                   +-------------+  |
  +---------------------------------------------------------------------+

Constructs used for Implementing NTB
====================================

1) Config Region
2) Self Scratchpad Registers
3) Peer Scratchpad Registers
4) Doorbell (DB) Registers
5) Memory Window (MW)


Config Region:
--------------

Config Region is a construct that is specific to NTB implemented using NTB
Endpoint Function Driver. The host and endpoint side NTB function driver will
exchange information with each other using this region. Config Region has
Control/Status Registers for configuring the Endpoint Controller. Host can
write into this region for configuring the outbound Address Translation Unit
(ATU) and to indicate the link status. Endpoint can indicate the status of
commands issued by host in this region. Endpoint can also indicate the
scratchpad offset and number of memory windows to the host using this region.

The format of Config Region is given below. All the fields here are 32 bits.

.. code-block:: text

  +------------------------+
  |         COMMAND        |
  +------------------------+
  |         ARGUMENT       |
  +------------------------+
  |          STATUS        |
  +------------------------+
  |         TOPOLOGY       |
  +------------------------+
  |   ADDRESS (LOWER 32)   |
  +------------------------+
  |   ADDRESS (UPPER 32)   |
  +------------------------+
  |           SIZE         |
  +------------------------+
  |   NO OF MEMORY WINDOW  |
  +------------------------+
  |  MEMORY WINDOW1 OFFSET |
  +------------------------+
  |       SPAD OFFSET      |
  +------------------------+
  |       SPAD COUNT       |
  +------------------------+
  |      DB ENTRY SIZE     |
  +------------------------+
  |         DB DATA        |
  +------------------------+
  |            :           |
  +------------------------+
  |            :           |
  +------------------------+
  |         DB DATA        |
  +------------------------+
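For reference, the same layout can be written down as a set of byte
offsets. The sketch below mirrors the ``NTB_EPF_*`` offsets used by the
ntb_hw_epf host driver added in this series (32 DB DATA registers start at
0x34, followed by per-doorbell offset registers); it is shown here only to
make the layout concrete, it is not a separate header in the tree.

.. code-block:: c

  /* Sketch: byte offsets of the Config Region fields (BAR0), matching
   * the macros in the ntb_hw_epf host driver from this series.
   */
  #define NTB_EPF_COMMAND        0x00
  #define NTB_EPF_ARGUMENT       0x04
  #define NTB_EPF_CMD_STATUS     0x08    /* command completion status */
  #define NTB_EPF_LINK_STATUS    0x0A    /* link status */
  #define NTB_EPF_TOPOLOGY       0x0C
  #define NTB_EPF_LOWER_ADDR     0x10    /* ADDRESS (LOWER 32) */
  #define NTB_EPF_UPPER_ADDR     0x14    /* ADDRESS (UPPER 32) */
  #define NTB_EPF_LOWER_SIZE     0x18
  #define NTB_EPF_UPPER_SIZE     0x1C
  #define NTB_EPF_MW_COUNT       0x20    /* NO OF MEMORY WINDOW */
  #define NTB_EPF_MW1_OFFSET     0x24
  #define NTB_EPF_SPAD_OFFSET    0x28
  #define NTB_EPF_SPAD_COUNT     0x2C
  #define NTB_EPF_DB_ENTRY_SIZE  0x30
  #define NTB_EPF_DB_DATA(n)     (0x34 + (n) * 4)   /* 32 entries */
  #define NTB_EPF_DB_OFFSET(n)   (0xB4 + (n) * 4)   /* follows DB DATA */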
COMMAND:

NTB function supports three commands:

CMD_CONFIGURE_DOORBELL (0x1): Command to configure doorbell. Before
invoking this command, the host should allocate and initialize
MSI/MSI-X vectors (i.e., initialize the MSI/MSI-X Capability in the
Endpoint). The endpoint on receiving this command will configure
the outbound ATU such that transactions to Doorbell BAR will be routed
to the MSI/MSI-X address programmed by the host. The ARGUMENT
register should be populated with number of DBs to configure (in the
lower 16 bits) and if MSI or MSI-X should be configured (BIT 16).

CMD_CONFIGURE_MW (0x2): Command to configure memory window (MW). The
host invokes this command after allocating a buffer that can be
accessed by remote host. The allocated address should be programmed
in the ADDRESS register (64 bit), the size should be programmed in
the SIZE register and the memory window index should be programmed
in the ARGUMENT register. The endpoint on receiving this command
will configure the outbound ATU such that transactions to MW BAR
are routed to the address provided by the host.

CMD_LINK_UP (0x3): Command to indicate an NTB application is
bound to the EP device on the host side. Once the endpoint
receives this command from both the hosts, the endpoint will
raise a LINK_UP event to both the hosts to indicate the host
NTB applications can start communicating with each other.
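To make the flow concrete, the sketch below shows one way a host could
program a memory window through the Config Region, following the same
write-command-then-poll-status pattern used by the ntb_hw_epf host driver
later in this series. The helper name and its arguments are illustrative
only; the register offsets are the ones sketched above and the command and
status values are the ones ntb_hw_epf defines.

.. code-block:: c

  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/io.h>
  #include <linux/kernel.h>

  #define CMD_CONFIGURE_MW       3       /* value as used by ntb_hw_epf */
  #define COMMAND_STATUS_OK      1
  #define COMMAND_STATUS_ERROR   2

  /* Sketch: ask the endpoint to route one MW BAR to 'pci_addr'. */
  static int ntb_cfg_mw_sketch(void __iomem *ctrl_reg, u64 pci_addr,
                               u64 size, u32 mw_index)
  {
          u32 status;
          int i;

          writel(lower_32_bits(pci_addr), ctrl_reg + NTB_EPF_LOWER_ADDR);
          writel(upper_32_bits(pci_addr), ctrl_reg + NTB_EPF_UPPER_ADDR);
          writel(lower_32_bits(size), ctrl_reg + NTB_EPF_LOWER_SIZE);
          writel(upper_32_bits(size), ctrl_reg + NTB_EPF_UPPER_SIZE);
          writel(mw_index, ctrl_reg + NTB_EPF_ARGUMENT);
          writel(CMD_CONFIGURE_MW, ctrl_reg + NTB_EPF_COMMAND);

          /* The endpoint reports completion through the STATUS field */
          for (i = 0; i < 1000; i++) {
                  status = readw(ctrl_reg + NTB_EPF_CMD_STATUS);
                  if (status == COMMAND_STATUS_OK)
                          return 0;
                  if (status == COMMAND_STATUS_ERROR)
                          return -EINVAL;
                  usleep_range(5, 10);
          }

          return -ETIMEDOUT;
  }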
ARGUMENT:

The value of this register is based on the commands issued in
command register. See COMMAND section for more information.

TOPOLOGY:

Set to NTB_TOPO_B2B_USD for Primary interface
Set to NTB_TOPO_B2B_DSD for Secondary interface

ADDRESS/SIZE:

Address and Size to be used while configuring the memory window.
See "CMD_CONFIGURE_MW" for more info.

MEMORY WINDOW1 OFFSET:

Memory Window 1 and Doorbell registers are packed together in the
same BAR. The initial portion of the region will have doorbell
registers and the latter portion of the region is for memory window 1.
This register will specify the offset of the memory window 1.

NO OF MEMORY WINDOW:

Specifies the number of memory windows supported by the NTB device.

SPAD OFFSET:

Self scratchpad region and config region are packed together in the
same BAR. The initial portion of the region will have config region
and the latter portion of the region is for self scratchpad. This
register will specify the offset of the self scratchpad registers.

SPAD COUNT:

Specifies the number of scratchpad registers supported by the NTB
device.

DB ENTRY SIZE:

Used to determine the offset within the DB BAR that should be written
in order to raise doorbell. EPF NTB can use either MSI or MSI-X to
ring doorbell (MSI-X support will be added later). MSI uses same
address for all the interrupts and MSI-X can provide different
addresses for different interrupts. The MSI/MSI-X address is provided
by the host and the address it gives is based on the MSI/MSI-X
implementation supported by the host. For instance, ARM platform
using GIC ITS will have the same MSI-X address for all the interrupts.
In order to support all the combinations and use the same mechanism
for both MSI and MSI-X, EPF NTB allocates a separate region in the
Outbound Address Space for each of the interrupts. This region will
be mapped to the MSI/MSI-X address provided by the host. If a host
provides the same address for all the interrupts, all the regions
will be translated to the same address. If a host provides different
addresses, the regions will be translated to different addresses. This
will ensure there is no difference while raising the doorbell.

DB DATA:

EPF NTB supports 32 interrupts, so there are 32 DB DATA registers.
This holds the MSI/MSI-X data that has to be written to MSI address
for raising doorbell interrupt. This will be populated by EPF NTB
while invoking CMD_CONFIGURE_DOORBELL.
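Putting DB ENTRY SIZE, DB DATA and the per-doorbell offsets together, a
host rings doorbell 'idx' by writing the idx-th DB DATA value into the
idx-th slot of the Doorbell BAR. The sketch below roughly mirrors what the
ntb_hw_epf host driver does when a peer doorbell is set; the helper itself
is illustrative, not an API from the tree.

.. code-block:: c

  #include <linux/io.h>

  /* Sketch: raise doorbell 'idx' on the peer.  ctrl_reg maps the Config
   * Region, db_reg maps the Doorbell BAR.
   */
  static void ntb_ring_db_sketch(void __iomem *ctrl_reg,
                                 void __iomem *db_reg, int idx)
  {
          u32 entry_size = readl(ctrl_reg + NTB_EPF_DB_ENTRY_SIZE);
          u32 db_data = readl(ctrl_reg + NTB_EPF_DB_DATA(idx));
          u32 db_offset = readl(ctrl_reg + NTB_EPF_DB_OFFSET(idx));

          /* The endpoint mapped this slot of the DB BAR to the MSI/MSI-X
           * address supplied by the peer, so the write below is delivered
           * as an interrupt on the other side.
           */
          writel(db_data, db_reg + entry_size * idx + db_offset);
  }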
Scratchpad Registers:
---------------------

Each host has its own register space allocated in the memory of NTB endpoint
controller. They are both readable and writable from both sides of the bridge.
They are used by applications built over NTB and can be used to pass control
and status information between both sides of a device.

Scratchpad registers has 2 parts
1) Self Scratchpad: Host's own register space
2) Peer Scratchpad: Remote host's register space.

Doorbell Registers:
-------------------

Doorbell Registers are used by the hosts to interrupt each other.

Memory Window:
--------------

Actual transfer of data between the two hosts will happen using the
memory window.

Modeling Constructs:
====================

There are 5 or more distinct regions (config, self scratchpad, peer
scratchpad, doorbell, one or more memory windows) to be modeled to achieve
NTB functionality. At least one memory window is required while more than
one is permitted. All these regions should be mapped to BARs for hosts to
access these regions.

If one 32-bit BAR is allocated for each of these regions, the scheme would
look like this:

====== ===============
BAR NO CONSTRUCTS USED
====== ===============
BAR0   Config Region
BAR1   Self Scratchpad
BAR2   Peer Scratchpad
BAR3   Doorbell
BAR4   Memory Window 1
BAR5   Memory Window 2
====== ===============

However if we allocate a separate BAR for each of the regions, there would not
be enough BARs for all the regions in a platform that supports only 64-bit
BARs.

In order to be supported by most of the platforms, the regions should be
packed and mapped to BARs in a way that provides NTB functionality and
also makes sure the host doesn't access any region that it is not supposed
to.

The following scheme is used in EPF NTB Function:

====== ===============================
BAR NO CONSTRUCTS USED
====== ===============================
BAR0   Config Region + Self Scratchpad
BAR1   Peer Scratchpad
BAR2   Doorbell + Memory Window 1
BAR3   Memory Window 2
BAR4   Memory Window 3
BAR5   Memory Window 4
====== ===============================

With this scheme, for the basic NTB functionality 3 BARs should be sufficient.

Modeling Config/Scratchpad Region:
----------------------------------

.. code-block:: text

 +-----------------+------->+------------------+        +-----------------+
 |       BAR0      |        |  CONFIG REGION   |        |       BAR0      |
 +-----------------+----+   +------------------+<-------+-----------------+
 |       BAR1      |    |   |SCRATCHPAD REGION |        |       BAR1      |
 +-----------------+    +-->+------------------+<-------+-----------------+
 |       BAR2      |            Local Memory            |       BAR2      |
 +-----------------+                                    +-----------------+
 |       BAR3      |                                    |       BAR3      |
 +-----------------+                                    +-----------------+
 |       BAR4      |                                    |       BAR4      |
 +-----------------+                                    +-----------------+
 |       BAR5      |                                    |       BAR5      |
 +-----------------+                                    +-----------------+
   EP CONTROLLER 1                                        EP CONTROLLER 2

Above diagram shows Config region + Scratchpad region for HOST1 (connected to
EP controller 1) allocated in local memory. The HOST1 can access the config
region and scratchpad region (self scratchpad) using BAR0 of EP controller 1.
The peer host (HOST2 connected to EP controller 2) can also access this
scratchpad region (peer scratchpad) using BAR1 of EP controller 2. This
diagram shows the case where Config region and Scratchpad regions are allocated
for HOST1, however the same is applicable for HOST2.

Modeling Doorbell/Memory Window 1:
----------------------------------

.. code-block:: text

 +-----------------+    +----->+----------------+-----------+-----------------+
 |       BAR0      |    |      |   Doorbell 1   +-----------> MSI-X ADDRESS 1 |
 +-----------------+    |      +----------------+           +-----------------+
 |       BAR1      |    |      |   Doorbell 2   +---------+ |                 |
 +-----------------+----+      +----------------+         | |                 |
 |       BAR2      |           |   Doorbell 3   +-------+ | +-----------------+
 +-----------------+----+      +----------------+       | +-> MSI-X ADDRESS 2 |
 |       BAR3      |    |      |   Doorbell 4   +-----+ |   +-----------------+
 +-----------------+    |      |----------------+     | |   |                 |
 |       BAR4      |    |      |                |     | |   +-----------------+
 +-----------------+    |      |       MW1      +---+ | +-->+ MSI-X ADDRESS 3||
 |       BAR5      |    |      |                |   | |     +-----------------+
 +-----------------+    +----->-----------------+   | |     |                 |
   EP CONTROLLER 1             |                |   | |     +-----------------+
                               |                |   | +---->+ MSI-X ADDRESS 4 |
                               +----------------+   |       +-----------------+
                                 EP CONTROLLER 2    |       |                 |
                                    (OB SPACE)      |       |                 |
                                                    +------->      MW1        |
                                                            |                 |
                                                            |                 |
                                                            +-----------------+
                                                            |                 |
                                                            |                 |
                                                            |                 |
                                                            |                 |
                                                            |                 |
                                                            +-----------------+
                                                              PCI Address Space
                                                             (Managed by HOST2)

Above diagram shows how the doorbell and memory window 1 is mapped so that
HOST1 can raise doorbell interrupt on HOST2 and also how HOST1 can access
buffers exposed by HOST2 using memory window1 (MW1). Here doorbell and
memory window 1 regions are allocated in EP controller 2 outbound (OB) address
space. Allocating and configuring BARs for doorbell and memory window1
is done during the initialization phase of NTB endpoint function driver.
Mapping from EP controller 2 OB space to PCI address space is done when HOST2
sends CMD_CONFIGURE_MW/CMD_CONFIGURE_DOORBELL.
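On the endpoint side, handling CMD_CONFIGURE_MW comes down to mapping the
outbound window reserved for that MW at initialization time onto the PCI
address the host wrote into the ADDRESS/SIZE registers. A minimal sketch of
that single step is below, assuming the generic pci_epc_map_addr() helper
with its v5.12-era signature; all of the bookkeeping done by the real
pci-epf-ntb function driver is omitted.

.. code-block:: c

  #include <linux/pci-epc.h>

  /* Sketch: route one outbound window to the host-supplied address.
   * 'phys_base' is the outbound space reserved for this memory window,
   * 'pci_addr' and 'size' are the values the host wrote before issuing
   * CMD_CONFIGURE_MW.
   */
  static int ntb_epf_map_mw_sketch(struct pci_epc *epc, u8 func_no,
                                   phys_addr_t phys_base, u64 pci_addr,
                                   size_t size)
  {
          /* After this call, accesses hitting the MW BAR are translated
           * by the outbound ATU to the host buffer at 'pci_addr'.
           */
          return pci_epc_map_addr(epc, func_no, phys_base, pci_addr, size);
  }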
Modeling Optional Memory Windows:
---------------------------------

This is modeled the same way as MW1 but each of the additional memory windows
is mapped to separate BARs.
Documentation/PCI/endpoint/pci-ntb-howto.rst (new file, 161 lines)
@@ -0,0 +1,161 @@

.. SPDX-License-Identifier: GPL-2.0

====================================================================
PCI Non-Transparent Bridge (NTB) Endpoint Function (EPF) User Guide
====================================================================

:Author: Kishon Vijay Abraham I <kishon@ti.com>

This document is a guide to help users use pci-epf-ntb function driver
and ntb_hw_epf host driver for NTB functionality. The list of steps to
be followed in the host side and EP side is given below. For the hardware
configuration and internals of NTB using configurable endpoints see
Documentation/PCI/endpoint/pci-ntb-function.rst

Endpoint Device
===============

Endpoint Controller Devices
---------------------------

For implementing NTB functionality at least two endpoint controller devices
are required.

To find the list of endpoint controller devices in the system::

   # ls /sys/class/pci_epc/
   2900000.pcie-ep  2910000.pcie-ep

If PCI_ENDPOINT_CONFIGFS is enabled::

   # ls /sys/kernel/config/pci_ep/controllers
   2900000.pcie-ep  2910000.pcie-ep


Endpoint Function Drivers
-------------------------

To find the list of endpoint function drivers in the system::

   # ls /sys/bus/pci-epf/drivers
   pci_epf_ntb  pci_epf_ntb

If PCI_ENDPOINT_CONFIGFS is enabled::

   # ls /sys/kernel/config/pci_ep/functions
   pci_epf_ntb  pci_epf_ntb


Creating pci-epf-ntb Device
----------------------------

PCI endpoint function device can be created using the configfs. To create
pci-epf-ntb device, the following commands can be used::

   # mount -t configfs none /sys/kernel/config
   # cd /sys/kernel/config/pci_ep/
   # mkdir functions/pci_epf_ntb/func1

The "mkdir func1" above creates the pci-epf-ntb function device that will
be probed by pci_epf_ntb driver.

The PCI endpoint framework populates the directory with the following
configurable fields::

   # ls functions/pci_epf_ntb/func1
   baseclass_code    deviceid        msi_interrupts   pci-epf-ntb.0
   progif_code       secondary       subsys_id        vendorid
   cache_line_size   interrupt_pin   msix_interrupts  primary
   revid             subclass_code   subsys_vendor_id

The PCI endpoint function driver populates these entries with default values
when the device is bound to the driver. The pci-epf-ntb driver populates
vendorid with 0xffff and interrupt_pin with 0x0001::

   # cat functions/pci_epf_ntb/func1/vendorid
   0xffff
   # cat functions/pci_epf_ntb/func1/interrupt_pin
   0x0001


Configuring pci-epf-ntb Device
-------------------------------

The user can configure the pci-epf-ntb device using its configfs entry. In order
to change the vendorid and the deviceid, the following
commands can be used::

   # echo 0x104c > functions/pci_epf_ntb/func1/vendorid
   # echo 0xb00d > functions/pci_epf_ntb/func1/deviceid

In order to configure NTB specific attributes, a new sub-directory to func1
should be created::

   # mkdir functions/pci_epf_ntb/func1/pci_epf_ntb.0/

The NTB function driver will populate this directory with various attributes
that can be configured by the user::

   # ls functions/pci_epf_ntb/func1/pci_epf_ntb.0/
   db_count    mw1    mw2    mw3    mw4    num_mws
   spad_count

A sample configuration for NTB function is given below::

   # echo 4 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/db_count
   # echo 128 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/spad_count
   # echo 2 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/num_mws
   # echo 0x100000 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/mw1
   # echo 0x100000 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/mw2

Binding pci-epf-ntb Device to EP Controller
--------------------------------------------

NTB function device should be attached to two PCI endpoint controllers
connected to the two hosts. Use the 'primary' and 'secondary' entries
inside NTB function device to attach one PCI endpoint controller to
primary interface and the other PCI endpoint controller to the secondary
interface::

   # ln -s controllers/2900000.pcie-ep/ functions/pci-epf-ntb/func1/primary
   # ln -s controllers/2910000.pcie-ep/ functions/pci-epf-ntb/func1/secondary

Once the above step is completed, both the PCI endpoint controllers are ready to
establish a link with the host.


Start the Link
--------------

In order for the endpoint device to establish a link with the host, the _start_
field should be populated with '1'. For NTB, both the PCI endpoint controllers
should establish link with the host::

   # echo 1 > controllers/2900000.pcie-ep/start
   # echo 1 > controllers/2910000.pcie-ep/start


RootComplex Device
==================

lspci Output
------------

Note that the devices listed here correspond to the values populated in
"Creating pci-epf-ntb Device" section above::

   # lspci
   0000:00:00.0 PCI bridge: Texas Instruments Device b00d
   0000:01:00.0 RAM memory: Texas Instruments Device b00d


Using ntb_hw_epf Device
-----------------------

The host side software follows the standard NTB software architecture in Linux.
All the existing client side NTB utilities like NTB Transport Client and NTB
Netdev, NTB Ping Pong Test Client and NTB Tool Test Client can be used with NTB
function device.

For more information on NTB see
:doc:`Non-Transparent Bridge <../../driver-api/ntb>`
@@ -132,6 +132,7 @@ properties:
       - enum:
           - friendlyarm,nanopc-t4
           - friendlyarm,nanopi-m4
+          - friendlyarm,nanopi-m4b
           - friendlyarm,nanopi-neo4
       - const: rockchip,rk3399

@@ -14,6 +14,7 @@ properties:
   items:
     - enum:
         - brcm,bcm2711-pcie # The Raspberry Pi 4
+        - brcm,bcm4908-pcie
         - brcm,bcm7211-pcie # Broadcom STB version of RPi4
        - brcm,bcm7278-pcie # Broadcom 7278 Arm
        - brcm,bcm7216-pcie # Broadcom 7216 Arm
@@ -63,15 +64,6 @@ properties:

   aspm-no-l0s: true

-  resets:
-    description: for "brcm,bcm7216-pcie", must be a valid reset
-      phandle pointing to the RESCAL reset controller provider node.
-    $ref: "/schemas/types.yaml#/definitions/phandle"
-
-  reset-names:
-    items:
-      - const: rescal
-
   brcm,scb-sizes:
     description: u64 giving the 64bit PCIe memory
       viewport size of a memory controller. There may be up to
@@ -98,12 +90,39 @@ required:

 allOf:
   - $ref: /schemas/pci/pci-bus.yaml#
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: brcm,bcm4908-pcie
+    then:
+      properties:
+        resets:
+          items:
+            - description: reset controller handling the PERST# signal
+
+        reset-names:
+          items:
+            - const: perst
+
+      required:
+        - resets
+        - reset-names
   - if:
       properties:
         compatible:
           contains:
             const: brcm,bcm7216-pcie
     then:
+      properties:
+        resets:
+          items:
+            - description: phandle pointing to the RESCAL reset controller
+
+        reset-names:
+          items:
+            - const: rescal
+
       required:
         - resets
         - reset-names
@@ -26,6 +26,7 @@ Required properties:
         "fsl,ls1046a-pcie-ep", "fsl,ls-pcie-ep"
         "fsl,ls1088a-pcie-ep", "fsl,ls-pcie-ep"
         "fsl,ls2088a-pcie-ep", "fsl,ls-pcie-ep"
+        "fsl,lx2160ar2-pcie-ep", "fsl,ls-pcie-ep"
 - reg: base addresses and lengths of the PCIe controller register blocks.
 - interrupts: A list of interrupt outputs of the controller. Must contain an
   entry for each entry in the interrupt-names property.
@@ -0,0 +1,92 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/microchip,pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Microchip PCIe Root Port Bridge Controller Device Tree Bindings

maintainers:
  - Daire McNamara <daire.mcnamara@microchip.com>

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#

properties:
  compatible:
    const: microchip,pcie-host-1.0 # PolarFire

  reg:
    maxItems: 2

  reg-names:
    items:
      - const: cfg
      - const: apb

  interrupts:
    minItems: 1
    maxItems: 2
    items:
      - description: PCIe host controller
      - description: builtin MSI controller

  interrupt-names:
    minItems: 1
    maxItems: 2
    items:
      - const: pcie
      - const: msi

  ranges:
    maxItems: 1

  msi-controller:
    description: Identifies the node as an MSI controller.

  msi-parent:
    description: MSI controller the device is capable of using.

required:
  - reg
  - reg-names
  - "#interrupt-cells"
  - interrupts
  - interrupt-map-mask
  - interrupt-map
  - msi-controller

unevaluatedProperties: false

examples:
  - |
    soc {
            #address-cells = <2>;
            #size-cells = <2>;
            pcie0: pcie@2030000000 {
                    compatible = "microchip,pcie-host-1.0";
                    reg = <0x0 0x70000000 0x0 0x08000000>,
                          <0x0 0x43000000 0x0 0x00010000>;
                    reg-names = "cfg", "apb";
                    device_type = "pci";
                    #address-cells = <3>;
                    #size-cells = <2>;
                    #interrupt-cells = <1>;
                    interrupts = <119>;
                    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
                    interrupt-map = <0 0 0 1 &pcie_intc0 0>,
                                    <0 0 0 2 &pcie_intc0 1>,
                                    <0 0 0 3 &pcie_intc0 2>,
                                    <0 0 0 4 &pcie_intc0 3>;
                    interrupt-parent = <&plic0>;
                    msi-parent = <&pcie0>;
                    msi-controller;
                    bus-range = <0x00 0x7f>;
                    ranges = <0x03000000 0x0 0x78000000 0x0 0x78000000 0x0 0x04000000>;
                    pcie_intc0: interrupt-controller {
                            #address-cells = <0>;
                            #interrupt-cells = <1>;
                            interrupt-controller;
                    };
            };
    };
@@ -132,8 +132,8 @@
 			- "master_bus"	AXI Master clock
 			- "slave_bus"	AXI Slave clock

--clock-names:
-	Usage: required for sdm845 and sm8250
+- clock-names:
+	Usage: required for sdm845
 	Value type: <stringlist>
 	Definition: Should contain the following entries
 			- "aux"		Auxiliary clock
@@ -144,6 +144,19 @@
 			- "tbu"		PCIe TBU clock
 			- "pipe"	PIPE clock

+- clock-names:
+	Usage: required for sm8250
+	Value type: <stringlist>
+	Definition: Should contain the following entries
+			- "aux"		Auxiliary clock
+			- "cfg"		Configuration clock
+			- "bus_master"	Master AXI clock
+			- "bus_slave"	Slave AXI clock
+			- "slave_q2a"	Slave Q2A clock
+			- "tbu"		PCIe TBU clock
+			- "ddrss_sf_tbu" PCIe SF TBU clock
+			- "pipe"	PIPE clock
+
 - resets:
 	Usage: required
 	Value type: <prop-encoded-array>
@@ -2578,7 +2578,7 @@ L: linux-kernel@vger.kernel.org
 S:	Maintained
 F:	drivers/clk/keystone/

-ARM/TEXAS INSTRUMENT KEYSTONE ClOCKSOURCE
+ARM/TEXAS INSTRUMENT KEYSTONE CLOCKSOURCE
 M:	Santosh Shilimkar <ssantosh@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-kernel@vger.kernel.org
@@ -13890,6 +13890,13 @@ S: Supported
 F:	Documentation/devicetree/bindings/pci/mediatek*
 F:	drivers/pci/controller/*mediatek*

+PCIE DRIVER FOR MICROCHIP
+M:	Daire McNamara <daire.mcnamara@microchip.com>
+L:	linux-pci@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/pci/microchip*
+F:	drivers/pci/controller/*microchip*
+
 PCIE DRIVER FOR QUALCOMM MSM
 M:	Stanimir Varbanov <svarbanov@mm-sol.com>
 L:	linux-pci@vger.kernel.org
@@ -44,7 +44,7 @@ static inline int __test_facility(unsigned long nr, void *facilities)
 }

 /*
- * The test_facility function uses the bit odering where the MSB is bit 0.
+ * The test_facility function uses the bit ordering where the MSB is bit 0.
  * That makes it easier to query facility bits with the bit number as
  * documented in the Principles of Operation.
  */
@@ -56,8 +56,6 @@ static struct acpi_scan_handler pci_root_handler = {
 	},
 };

-static DEFINE_MUTEX(osc_lock);
-
 /**
  * acpi_is_root_bridge - determine whether an ACPI CA node is a PCI root bridge
  * @handle: the ACPI CA node in question.
@@ -223,12 +221,7 @@ static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root,

 static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
 {
-	acpi_status status;
-
-	mutex_lock(&osc_lock);
-	status = acpi_pci_query_osc(root, flags, NULL);
-	mutex_unlock(&osc_lock);
-	return status;
+	return acpi_pci_query_osc(root, flags, NULL);
 }

 struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
@@ -353,10 +346,10 @@ EXPORT_SYMBOL_GPL(acpi_get_pci_dev);
  * _OSC bits the BIOS has granted control of, but its contents are meaningless
  * on failure.
  **/
-acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
+static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
 {
 	struct acpi_pci_root *root;
-	acpi_status status = AE_OK;
+	acpi_status status;
 	u32 ctrl, capbuf[3];

 	if (!mask)
@@ -370,18 +363,16 @@ acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
 	if (!root)
 		return AE_NOT_EXIST;

-	mutex_lock(&osc_lock);
-
 	*mask = ctrl | root->osc_control_set;
 	/* No need to evaluate _OSC if the control was already granted. */
 	if ((root->osc_control_set & ctrl) == ctrl)
-		goto out;
+		return AE_OK;

 	/* Need to check the available controls bits before requesting them. */
 	while (*mask) {
 		status = acpi_pci_query_osc(root, root->osc_support_set, mask);
 		if (ACPI_FAILURE(status))
-			goto out;
+			return status;
 		if (ctrl == *mask)
 			break;
 		decode_osc_control(root, "platform does not support",
@@ -392,21 +383,19 @@ acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
 	if ((ctrl & req) != req) {
 		decode_osc_control(root, "not requesting control; platform does not support",
 				   req & ~(ctrl));
-		status = AE_SUPPORT;
-		goto out;
+		return AE_SUPPORT;
 	}

 	capbuf[OSC_QUERY_DWORD] = 0;
 	capbuf[OSC_SUPPORT_DWORD] = root->osc_support_set;
 	capbuf[OSC_CONTROL_DWORD] = ctrl;
 	status = acpi_pci_run_osc(handle, capbuf, mask);
-	if (ACPI_SUCCESS(status))
-		root->osc_control_set = *mask;
-out:
-	mutex_unlock(&osc_lock);
-	return status;
+	if (ACPI_FAILURE(status))
+		return status;
+
+	root->osc_control_set = *mask;
+	return AE_OK;
 }
-EXPORT_SYMBOL(acpi_pci_osc_control_set);

 static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
 				 bool is_pcie)
@@ -452,9 +441,8 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
 		if ((status == AE_NOT_FOUND) && !is_pcie)
 			return;

-		dev_info(&device->dev, "_OSC failed (%s)%s\n",
-			 acpi_format_exception(status),
-			 pcie_aspm_support_enabled() ? "; disabling ASPM" : "");
+		dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
+			 acpi_format_exception(status));
 		return;
 	}
@@ -510,7 +498,7 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
 	} else {
 		decode_osc_control(root, "OS requested", requested);
 		decode_osc_control(root, "platform willing to grant", control);
-		dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n",
+		dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
 			 acpi_format_exception(status));

 		/*
@@ -141,7 +141,7 @@ static void qxl_drm_release(struct drm_device *dev)

 	/*
 	 * TODO: qxl_device_fini() call should be in qxl_pci_remove(),
-	 * reodering qxl_modeset_fini() + qxl_device_fini() calls is
+	 * reordering qxl_modeset_fini() + qxl_device_fini() calls is
 	 * non-trivial though.
 	 */
 	qxl_modeset_fini(qdev);
@@ -68,7 +68,6 @@
 #define PCI_ENDPOINT_TEST_FLAGS			0x2c
 #define FLAG_USE_DMA				BIT(0)

-#define PCI_DEVICE_ID_TI_J721E			0xb00d
 #define PCI_DEVICE_ID_TI_AM654			0xb00c
 #define PCI_DEVICE_ID_LS1088A			0x80c0

@@ -870,7 +870,7 @@ struct iwl_fw_dbg_trigger_time_event {
  * tx_bar: tid bitmap to configure on what tid the trigger should occur
  *	when a BAR is send (for an Rx BlocAck session).
  * frame_timeout: tid bitmap to configure on what tid the trigger should occur
- *	when a frame times out in the reodering buffer.
+ *	when a frame times out in the reordering buffer.
  */
 struct iwl_fw_dbg_trigger_ba {
 	__le16 rx_ba_start;
@@ -2,4 +2,5 @@
 source "drivers/ntb/hw/amd/Kconfig"
 source "drivers/ntb/hw/idt/Kconfig"
 source "drivers/ntb/hw/intel/Kconfig"
+source "drivers/ntb/hw/epf/Kconfig"
 source "drivers/ntb/hw/mscc/Kconfig"
@@ -2,4 +2,5 @@
 obj-$(CONFIG_NTB_AMD)	+= amd/
 obj-$(CONFIG_NTB_IDT)	+= idt/
 obj-$(CONFIG_NTB_INTEL)	+= intel/
+obj-$(CONFIG_NTB_EPF)	+= epf/
 obj-$(CONFIG_NTB_SWITCHTEC) += mscc/
drivers/ntb/hw/epf/Kconfig (new file, 6 lines)
@@ -0,0 +1,6 @@
config NTB_EPF
	tristate "Generic EPF Non-Transparent Bridge support"
	depends on m
	help
	  This driver supports EPF NTB on configurable endpoint.
	  If unsure, say N.

drivers/ntb/hw/epf/Makefile (new file, 1 line)
@@ -0,0 +1 @@
obj-$(CONFIG_NTB_EPF) += ntb_hw_epf.o
drivers/ntb/hw/epf/ntb_hw_epf.c (new file, 753 lines)
@@ -0,0 +1,753 @@
// SPDX-License-Identifier: GPL-2.0
/**
 * Host side endpoint driver to implement Non-Transparent Bridge functionality
 *
 * Copyright (C) 2020 Texas Instruments
 * Author: Kishon Vijay Abraham I <kishon@ti.com>
 */

#include <linux/delay.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/ntb.h>

#define NTB_EPF_COMMAND		0x0
#define CMD_CONFIGURE_DOORBELL	1
#define CMD_TEARDOWN_DOORBELL	2
#define CMD_CONFIGURE_MW	3
#define CMD_TEARDOWN_MW		4
#define CMD_LINK_UP		5
#define CMD_LINK_DOWN		6

#define NTB_EPF_ARGUMENT	0x4
#define MSIX_ENABLE		BIT(16)

#define NTB_EPF_CMD_STATUS	0x8
#define COMMAND_STATUS_OK	1
#define COMMAND_STATUS_ERROR	2

#define NTB_EPF_LINK_STATUS	0x0A
#define LINK_STATUS_UP		BIT(0)

#define NTB_EPF_TOPOLOGY	0x0C
#define NTB_EPF_LOWER_ADDR	0x10
#define NTB_EPF_UPPER_ADDR	0x14
#define NTB_EPF_LOWER_SIZE	0x18
#define NTB_EPF_UPPER_SIZE	0x1C
#define NTB_EPF_MW_COUNT	0x20
#define NTB_EPF_MW1_OFFSET	0x24
#define NTB_EPF_SPAD_OFFSET	0x28
#define NTB_EPF_SPAD_COUNT	0x2C
#define NTB_EPF_DB_ENTRY_SIZE	0x30
#define NTB_EPF_DB_DATA(n)	(0x34 + (n) * 4)
#define NTB_EPF_DB_OFFSET(n)	(0xB4 + (n) * 4)

#define NTB_EPF_MIN_DB_COUNT	3
#define NTB_EPF_MAX_DB_COUNT	31
#define NTB_EPF_MW_OFFSET	2

#define NTB_EPF_COMMAND_TIMEOUT	1000 /* 1 Sec */

enum pci_barno {
	BAR_0,
	BAR_1,
	BAR_2,
	BAR_3,
	BAR_4,
	BAR_5,
};

struct ntb_epf_dev {
	struct ntb_dev ntb;
	struct device *dev;
	/* Mutex to protect providing commands to NTB EPF */
	struct mutex cmd_lock;

	enum pci_barno ctrl_reg_bar;
	enum pci_barno peer_spad_reg_bar;
	enum pci_barno db_reg_bar;

	unsigned int mw_count;
	unsigned int spad_count;
	unsigned int db_count;

	void __iomem *ctrl_reg;
	void __iomem *db_reg;
	void __iomem *peer_spad_reg;

	unsigned int self_spad;
	unsigned int peer_spad;

	int db_val;
	u64 db_valid_mask;
};

#define ntb_ndev(__ntb) container_of(__ntb, struct ntb_epf_dev, ntb)

struct ntb_epf_data {
	/* BAR that contains both control region and self spad region */
	enum pci_barno ctrl_reg_bar;
	/* BAR that contains peer spad region */
	enum pci_barno peer_spad_reg_bar;
	/* BAR that contains Doorbell region and Memory window '1' */
	enum pci_barno db_reg_bar;
};

static int ntb_epf_send_command(struct ntb_epf_dev *ndev, u32 command,
				u32 argument)
{
	ktime_t timeout;
	bool timedout;
	int ret = 0;
	u32 status;

	mutex_lock(&ndev->cmd_lock);
	writel(argument, ndev->ctrl_reg + NTB_EPF_ARGUMENT);
	writel(command, ndev->ctrl_reg + NTB_EPF_COMMAND);

	timeout = ktime_add_ms(ktime_get(), NTB_EPF_COMMAND_TIMEOUT);
	while (1) {
		timedout = ktime_after(ktime_get(), timeout);
		status = readw(ndev->ctrl_reg + NTB_EPF_CMD_STATUS);

		if (status == COMMAND_STATUS_ERROR) {
			ret = -EINVAL;
			break;
		}

		if (status == COMMAND_STATUS_OK)
			break;

		if (WARN_ON(timedout)) {
			ret = -ETIMEDOUT;
			break;
		}

		usleep_range(5, 10);
	}

	writew(0, ndev->ctrl_reg + NTB_EPF_CMD_STATUS);
	mutex_unlock(&ndev->cmd_lock);

	return ret;
}

static int ntb_epf_mw_to_bar(struct ntb_epf_dev *ndev, int idx)
{
	struct device *dev = ndev->dev;

	if (idx < 0 || idx > ndev->mw_count) {
		dev_err(dev, "Unsupported Memory Window index %d\n", idx);
		return -EINVAL;
	}

	return idx + 2;
}

static int ntb_epf_mw_count(struct ntb_dev *ntb, int pidx)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;

	if (pidx != NTB_DEF_PEER_IDX) {
		dev_err(dev, "Unsupported Peer ID %d\n", pidx);
		return -EINVAL;
	}

	return ndev->mw_count;
}

static int ntb_epf_mw_get_align(struct ntb_dev *ntb, int pidx, int idx,
				resource_size_t *addr_align,
				resource_size_t *size_align,
				resource_size_t *size_max)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	int bar;

	if (pidx != NTB_DEF_PEER_IDX) {
		dev_err(dev, "Unsupported Peer ID %d\n", pidx);
		return -EINVAL;
	}

	bar = ntb_epf_mw_to_bar(ndev, idx);
	if (bar < 0)
		return bar;

	if (addr_align)
		*addr_align = SZ_4K;

	if (size_align)
		*size_align = 1;

	if (size_max)
		*size_max = pci_resource_len(ndev->ntb.pdev, bar);

	return 0;
}

static u64 ntb_epf_link_is_up(struct ntb_dev *ntb,
			      enum ntb_speed *speed,
			      enum ntb_width *width)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	u32 status;

	status = readw(ndev->ctrl_reg + NTB_EPF_LINK_STATUS);

	return status & LINK_STATUS_UP;
}

static u32 ntb_epf_spad_read(struct ntb_dev *ntb, int idx)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	u32 offset;

	if (idx < 0 || idx >= ndev->spad_count) {
		dev_err(dev, "READ: Invalid ScratchPad Index %d\n", idx);
		return 0;
	}

	offset = readl(ndev->ctrl_reg + NTB_EPF_SPAD_OFFSET);
	offset += (idx << 2);

	return readl(ndev->ctrl_reg + offset);
}

static int ntb_epf_spad_write(struct ntb_dev *ntb,
			      int idx, u32 val)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	u32 offset;

	if (idx < 0 || idx >= ndev->spad_count) {
		dev_err(dev, "WRITE: Invalid ScratchPad Index %d\n", idx);
		return -EINVAL;
	}

	offset = readl(ndev->ctrl_reg + NTB_EPF_SPAD_OFFSET);
	offset += (idx << 2);
	writel(val, ndev->ctrl_reg + offset);

	return 0;
}

static u32 ntb_epf_peer_spad_read(struct ntb_dev *ntb, int pidx, int idx)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	u32 offset;

	if (pidx != NTB_DEF_PEER_IDX) {
		dev_err(dev, "Unsupported Peer ID %d\n", pidx);
		return -EINVAL;
	}

	if (idx < 0 || idx >= ndev->spad_count) {
		dev_err(dev, "WRITE: Invalid Peer ScratchPad Index %d\n", idx);
		return -EINVAL;
	}

	offset = (idx << 2);
	return readl(ndev->peer_spad_reg + offset);
}

static int ntb_epf_peer_spad_write(struct ntb_dev *ntb, int pidx,
				   int idx, u32 val)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	u32 offset;

	if (pidx != NTB_DEF_PEER_IDX) {
		dev_err(dev, "Unsupported Peer ID %d\n", pidx);
		return -EINVAL;
	}

	if (idx < 0 || idx >= ndev->spad_count) {
		dev_err(dev, "WRITE: Invalid Peer ScratchPad Index %d\n", idx);
		return -EINVAL;
	}

	offset = (idx << 2);
	writel(val, ndev->peer_spad_reg + offset);

	return 0;
}

static int ntb_epf_link_enable(struct ntb_dev *ntb,
			       enum ntb_speed max_speed,
			       enum ntb_width max_width)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	int ret;

	ret = ntb_epf_send_command(ndev, CMD_LINK_UP, 0);
	if (ret) {
		dev_err(dev, "Fail to enable link\n");
		return ret;
	}

	return 0;
}

static int ntb_epf_link_disable(struct ntb_dev *ntb)
{
	struct ntb_epf_dev *ndev = ntb_ndev(ntb);
	struct device *dev = ndev->dev;
	int ret;

	ret = ntb_epf_send_command(ndev, CMD_LINK_DOWN, 0);
	if (ret) {
		dev_err(dev, "Fail to disable link\n");
		return ret;
	}

	return 0;
}

static irqreturn_t ntb_epf_vec_isr(int irq, void *dev)
{
	struct ntb_epf_dev *ndev = dev;
	int irq_no;

	irq_no = irq - pci_irq_vector(ndev->ntb.pdev, 0);
	ndev->db_val = irq_no + 1;

	if (irq_no == 0)
		ntb_link_event(&ndev->ntb);
	else
		ntb_db_event(&ndev->ntb, irq_no);

	return IRQ_HANDLED;
}

static int ntb_epf_init_isr(struct ntb_epf_dev *ndev, int msi_min, int msi_max)
|
||||||
|
{
|
||||||
|
struct pci_dev *pdev = ndev->ntb.pdev;
|
||||||
|
struct device *dev = ndev->dev;
|
||||||
|
u32 argument = MSIX_ENABLE;
|
||||||
|
int irq;
|
||||||
|
int ret;
|
||||||
|
int i;
|
||||||
|
|
||||||
|
irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max, PCI_IRQ_MSIX);
|
||||||
|
if (irq < 0) {
|
||||||
|
dev_dbg(dev, "Failed to get MSIX interrupts\n");
|
||||||
|
irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max,
|
||||||
|
PCI_IRQ_MSI);
|
||||||
|
if (irq < 0) {
|
||||||
|
dev_err(dev, "Failed to get MSI interrupts\n");
|
||||||
|
return irq;
|
||||||
|
}
|
||||||
|
argument &= ~MSIX_ENABLE;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (i = 0; i < irq; i++) {
|
||||||
|
ret = request_irq(pci_irq_vector(pdev, i), ntb_epf_vec_isr,
|
||||||
|
0, "ntb_epf", ndev);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Failed to request irq\n");
|
||||||
|
goto err_request_irq;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
ndev->db_count = irq - 1;
|
||||||
|
|
||||||
|
ret = ntb_epf_send_command(ndev, CMD_CONFIGURE_DOORBELL,
|
||||||
|
argument | irq);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Failed to configure doorbell\n");
|
||||||
|
goto err_configure_db;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
err_configure_db:
|
||||||
|
for (i = 0; i < ndev->db_count + 1; i++)
|
||||||
|
free_irq(pci_irq_vector(pdev, i), ndev);
|
||||||
|
|
||||||
|
err_request_irq:
|
||||||
|
pci_free_irq_vectors(pdev);
|
||||||
|
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_peer_mw_count(struct ntb_dev *ntb)
|
||||||
|
{
|
||||||
|
return ntb_ndev(ntb)->mw_count;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_spad_count(struct ntb_dev *ntb)
|
||||||
|
{
|
||||||
|
return ntb_ndev(ntb)->spad_count;
|
||||||
|
}
|
||||||
|
|
||||||
|
static u64 ntb_epf_db_valid_mask(struct ntb_dev *ntb)
|
||||||
|
{
|
||||||
|
return ntb_ndev(ntb)->db_valid_mask;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_db_set_mask(struct ntb_dev *ntb, u64 db_bits)
|
||||||
|
{
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx,
|
||||||
|
dma_addr_t addr, resource_size_t size)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = ntb_ndev(ntb);
|
||||||
|
struct device *dev = ndev->dev;
|
||||||
|
resource_size_t mw_size;
|
||||||
|
int bar;
|
||||||
|
|
||||||
|
if (pidx != NTB_DEF_PEER_IDX) {
|
||||||
|
dev_err(dev, "Unsupported Peer ID %d\n", pidx);
|
||||||
|
return -EINVAL;
|
||||||
|
}
|
||||||
|
|
||||||
|
bar = idx + NTB_EPF_MW_OFFSET;
|
||||||
|
|
||||||
|
mw_size = pci_resource_len(ntb->pdev, bar);
|
||||||
|
|
||||||
|
if (size > mw_size) {
|
||||||
|
dev_err(dev, "Size:%pa is greater than the MW size %pa\n",
|
||||||
|
&size, &mw_size);
|
||||||
|
return -EINVAL;
|
||||||
|
}
|
||||||
|
|
||||||
|
writel(lower_32_bits(addr), ndev->ctrl_reg + NTB_EPF_LOWER_ADDR);
|
||||||
|
writel(upper_32_bits(addr), ndev->ctrl_reg + NTB_EPF_UPPER_ADDR);
|
||||||
|
writel(lower_32_bits(size), ndev->ctrl_reg + NTB_EPF_LOWER_SIZE);
|
||||||
|
writel(upper_32_bits(size), ndev->ctrl_reg + NTB_EPF_UPPER_SIZE);
|
||||||
|
ntb_epf_send_command(ndev, CMD_CONFIGURE_MW, idx);
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_mw_clear_trans(struct ntb_dev *ntb, int pidx, int idx)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = ntb_ndev(ntb);
|
||||||
|
struct device *dev = ndev->dev;
|
||||||
|
int ret = 0;
|
||||||
|
|
||||||
|
ntb_epf_send_command(ndev, CMD_TEARDOWN_MW, idx);
|
||||||
|
if (ret)
|
||||||
|
dev_err(dev, "Failed to teardown memory window\n");
|
||||||
|
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_peer_mw_get_addr(struct ntb_dev *ntb, int idx,
|
||||||
|
phys_addr_t *base, resource_size_t *size)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = ntb_ndev(ntb);
|
||||||
|
u32 offset = 0;
|
||||||
|
int bar;
|
||||||
|
|
||||||
|
if (idx == 0)
|
||||||
|
offset = readl(ndev->ctrl_reg + NTB_EPF_MW1_OFFSET);
|
||||||
|
|
||||||
|
bar = idx + NTB_EPF_MW_OFFSET;
|
||||||
|
|
||||||
|
if (base)
|
||||||
|
*base = pci_resource_start(ndev->ntb.pdev, bar) + offset;
|
||||||
|
|
||||||
|
if (size)
|
||||||
|
*size = pci_resource_len(ndev->ntb.pdev, bar) - offset;
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_peer_db_set(struct ntb_dev *ntb, u64 db_bits)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = ntb_ndev(ntb);
|
||||||
|
u32 interrupt_num = ffs(db_bits) + 1;
|
||||||
|
struct device *dev = ndev->dev;
|
||||||
|
u32 db_entry_size;
|
||||||
|
u32 db_offset;
|
||||||
|
u32 db_data;
|
||||||
|
|
||||||
|
if (interrupt_num > ndev->db_count) {
|
||||||
|
dev_err(dev, "DB interrupt %d greater than Max Supported %d\n",
|
||||||
|
interrupt_num, ndev->db_count);
|
||||||
|
return -EINVAL;
|
||||||
|
}
|
||||||
|
|
||||||
|
db_entry_size = readl(ndev->ctrl_reg + NTB_EPF_DB_ENTRY_SIZE);
|
||||||
|
|
||||||
|
db_data = readl(ndev->ctrl_reg + NTB_EPF_DB_DATA(interrupt_num));
|
||||||
|
db_offset = readl(ndev->ctrl_reg + NTB_EPF_DB_OFFSET(interrupt_num));
|
||||||
|
writel(db_data, ndev->db_reg + (db_entry_size * interrupt_num) +
|
||||||
|
db_offset);
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static u64 ntb_epf_db_read(struct ntb_dev *ntb)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = ntb_ndev(ntb);
|
||||||
|
|
||||||
|
return ndev->db_val;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_db_clear_mask(struct ntb_dev *ntb, u64 db_bits)
|
||||||
|
{
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_db_clear(struct ntb_dev *ntb, u64 db_bits)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = ntb_ndev(ntb);
|
||||||
|
|
||||||
|
ndev->db_val = 0;
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static const struct ntb_dev_ops ntb_epf_ops = {
|
||||||
|
.mw_count = ntb_epf_mw_count,
|
||||||
|
.spad_count = ntb_epf_spad_count,
|
||||||
|
.peer_mw_count = ntb_epf_peer_mw_count,
|
||||||
|
.db_valid_mask = ntb_epf_db_valid_mask,
|
||||||
|
.db_set_mask = ntb_epf_db_set_mask,
|
||||||
|
.mw_set_trans = ntb_epf_mw_set_trans,
|
||||||
|
.mw_clear_trans = ntb_epf_mw_clear_trans,
|
||||||
|
.peer_mw_get_addr = ntb_epf_peer_mw_get_addr,
|
||||||
|
.link_enable = ntb_epf_link_enable,
|
||||||
|
.spad_read = ntb_epf_spad_read,
|
||||||
|
.spad_write = ntb_epf_spad_write,
|
||||||
|
.peer_spad_read = ntb_epf_peer_spad_read,
|
||||||
|
.peer_spad_write = ntb_epf_peer_spad_write,
|
||||||
|
.peer_db_set = ntb_epf_peer_db_set,
|
||||||
|
.db_read = ntb_epf_db_read,
|
||||||
|
.mw_get_align = ntb_epf_mw_get_align,
|
||||||
|
.link_is_up = ntb_epf_link_is_up,
|
||||||
|
.db_clear_mask = ntb_epf_db_clear_mask,
|
||||||
|
.db_clear = ntb_epf_db_clear,
|
||||||
|
.link_disable = ntb_epf_link_disable,
|
||||||
|
};
|
||||||
|
|
||||||
|
static inline void ntb_epf_init_struct(struct ntb_epf_dev *ndev,
|
||||||
|
struct pci_dev *pdev)
|
||||||
|
{
|
||||||
|
ndev->ntb.pdev = pdev;
|
||||||
|
ndev->ntb.topo = NTB_TOPO_NONE;
|
||||||
|
ndev->ntb.ops = &ntb_epf_ops;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_init_dev(struct ntb_epf_dev *ndev)
|
||||||
|
{
|
||||||
|
struct device *dev = ndev->dev;
|
||||||
|
int ret;
|
||||||
|
|
||||||
|
/* One Link interrupt and rest doorbell interrupt */
|
||||||
|
ret = ntb_epf_init_isr(ndev, NTB_EPF_MIN_DB_COUNT + 1,
|
||||||
|
NTB_EPF_MAX_DB_COUNT + 1);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Failed to init ISR\n");
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
ndev->db_valid_mask = BIT_ULL(ndev->db_count) - 1;
|
||||||
|
ndev->mw_count = readl(ndev->ctrl_reg + NTB_EPF_MW_COUNT);
|
||||||
|
ndev->spad_count = readl(ndev->ctrl_reg + NTB_EPF_SPAD_COUNT);
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_init_pci(struct ntb_epf_dev *ndev,
|
||||||
|
struct pci_dev *pdev)
|
||||||
|
{
|
||||||
|
struct device *dev = ndev->dev;
|
||||||
|
int ret;
|
||||||
|
|
||||||
|
pci_set_drvdata(pdev, ndev);
|
||||||
|
|
||||||
|
ret = pci_enable_device(pdev);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Cannot enable PCI device\n");
|
||||||
|
goto err_pci_enable;
|
||||||
|
}
|
||||||
|
|
||||||
|
ret = pci_request_regions(pdev, "ntb");
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Cannot obtain PCI resources\n");
|
||||||
|
goto err_pci_regions;
|
||||||
|
}
|
||||||
|
|
||||||
|
pci_set_master(pdev);
|
||||||
|
|
||||||
|
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
|
||||||
|
if (ret) {
|
||||||
|
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Cannot set DMA mask\n");
|
||||||
|
goto err_dma_mask;
|
||||||
|
}
|
||||||
|
dev_warn(&pdev->dev, "Cannot DMA highmem\n");
|
||||||
|
}
|
||||||
|
|
||||||
|
ndev->ctrl_reg = pci_iomap(pdev, ndev->ctrl_reg_bar, 0);
|
||||||
|
if (!ndev->ctrl_reg) {
|
||||||
|
ret = -EIO;
|
||||||
|
goto err_dma_mask;
|
||||||
|
}
|
||||||
|
|
||||||
|
ndev->peer_spad_reg = pci_iomap(pdev, ndev->peer_spad_reg_bar, 0);
|
||||||
|
if (!ndev->peer_spad_reg) {
|
||||||
|
ret = -EIO;
|
||||||
|
goto err_dma_mask;
|
||||||
|
}
|
||||||
|
|
||||||
|
ndev->db_reg = pci_iomap(pdev, ndev->db_reg_bar, 0);
|
||||||
|
if (!ndev->db_reg) {
|
||||||
|
ret = -EIO;
|
||||||
|
goto err_dma_mask;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
err_dma_mask:
|
||||||
|
pci_clear_master(pdev);
|
||||||
|
|
||||||
|
err_pci_regions:
|
||||||
|
pci_disable_device(pdev);
|
||||||
|
|
||||||
|
err_pci_enable:
|
||||||
|
pci_set_drvdata(pdev, NULL);
|
||||||
|
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ntb_epf_deinit_pci(struct ntb_epf_dev *ndev)
|
||||||
|
{
|
||||||
|
struct pci_dev *pdev = ndev->ntb.pdev;
|
||||||
|
|
||||||
|
pci_iounmap(pdev, ndev->ctrl_reg);
|
||||||
|
pci_iounmap(pdev, ndev->peer_spad_reg);
|
||||||
|
pci_iounmap(pdev, ndev->db_reg);
|
||||||
|
|
||||||
|
pci_clear_master(pdev);
|
||||||
|
pci_release_regions(pdev);
|
||||||
|
pci_disable_device(pdev);
|
||||||
|
pci_set_drvdata(pdev, NULL);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ntb_epf_cleanup_isr(struct ntb_epf_dev *ndev)
|
||||||
|
{
|
||||||
|
struct pci_dev *pdev = ndev->ntb.pdev;
|
||||||
|
int i;
|
||||||
|
|
||||||
|
ntb_epf_send_command(ndev, CMD_TEARDOWN_DOORBELL, ndev->db_count + 1);
|
||||||
|
|
||||||
|
for (i = 0; i < ndev->db_count + 1; i++)
|
||||||
|
free_irq(pci_irq_vector(pdev, i), ndev);
|
||||||
|
pci_free_irq_vectors(pdev);
|
||||||
|
}
|
||||||
|
|
||||||
|
static int ntb_epf_pci_probe(struct pci_dev *pdev,
|
||||||
|
const struct pci_device_id *id)
|
||||||
|
{
|
||||||
|
enum pci_barno peer_spad_reg_bar = BAR_1;
|
||||||
|
enum pci_barno ctrl_reg_bar = BAR_0;
|
||||||
|
enum pci_barno db_reg_bar = BAR_2;
|
||||||
|
struct device *dev = &pdev->dev;
|
||||||
|
struct ntb_epf_data *data;
|
||||||
|
struct ntb_epf_dev *ndev;
|
||||||
|
int ret;
|
||||||
|
|
||||||
|
if (pci_is_bridge(pdev))
|
||||||
|
return -ENODEV;
|
||||||
|
|
||||||
|
ndev = devm_kzalloc(dev, sizeof(*ndev), GFP_KERNEL);
|
||||||
|
if (!ndev)
|
||||||
|
return -ENOMEM;
|
||||||
|
|
||||||
|
data = (struct ntb_epf_data *)id->driver_data;
|
||||||
|
if (data) {
|
||||||
|
if (data->peer_spad_reg_bar)
|
||||||
|
peer_spad_reg_bar = data->peer_spad_reg_bar;
|
||||||
|
if (data->ctrl_reg_bar)
|
||||||
|
ctrl_reg_bar = data->ctrl_reg_bar;
|
||||||
|
if (data->db_reg_bar)
|
||||||
|
db_reg_bar = data->db_reg_bar;
|
||||||
|
}
|
||||||
|
|
||||||
|
ndev->peer_spad_reg_bar = peer_spad_reg_bar;
|
||||||
|
ndev->ctrl_reg_bar = ctrl_reg_bar;
|
||||||
|
ndev->db_reg_bar = db_reg_bar;
|
||||||
|
ndev->dev = dev;
|
||||||
|
|
||||||
|
ntb_epf_init_struct(ndev, pdev);
|
||||||
|
mutex_init(&ndev->cmd_lock);
|
||||||
|
|
||||||
|
ret = ntb_epf_init_pci(ndev, pdev);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Failed to init PCI\n");
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
ret = ntb_epf_init_dev(ndev);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Failed to init device\n");
|
||||||
|
goto err_init_dev;
|
||||||
|
}
|
||||||
|
|
||||||
|
ret = ntb_register_device(&ndev->ntb);
|
||||||
|
if (ret) {
|
||||||
|
dev_err(dev, "Failed to register NTB device\n");
|
||||||
|
goto err_register_dev;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
err_register_dev:
|
||||||
|
ntb_epf_cleanup_isr(ndev);
|
||||||
|
|
||||||
|
err_init_dev:
|
||||||
|
ntb_epf_deinit_pci(ndev);
|
||||||
|
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ntb_epf_pci_remove(struct pci_dev *pdev)
|
||||||
|
{
|
||||||
|
struct ntb_epf_dev *ndev = pci_get_drvdata(pdev);
|
||||||
|
|
||||||
|
ntb_unregister_device(&ndev->ntb);
|
||||||
|
ntb_epf_cleanup_isr(ndev);
|
||||||
|
ntb_epf_deinit_pci(ndev);
|
||||||
|
}
|
||||||
|
|
||||||
|
static const struct ntb_epf_data j721e_data = {
|
||||||
|
.ctrl_reg_bar = BAR_0,
|
||||||
|
.peer_spad_reg_bar = BAR_1,
|
||||||
|
.db_reg_bar = BAR_2,
|
||||||
|
};
|
||||||
|
|
||||||
|
static const struct pci_device_id ntb_epf_pci_tbl[] = {
|
||||||
|
{
|
||||||
|
PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
|
||||||
|
.class = PCI_CLASS_MEMORY_RAM << 8, .class_mask = 0xffff00,
|
||||||
|
.driver_data = (kernel_ulong_t)&j721e_data,
|
||||||
|
},
|
||||||
|
{ },
|
||||||
|
};
|
||||||
|
|
||||||
|
static struct pci_driver ntb_epf_pci_driver = {
|
||||||
|
.name = KBUILD_MODNAME,
|
||||||
|
.id_table = ntb_epf_pci_tbl,
|
||||||
|
.probe = ntb_epf_pci_probe,
|
||||||
|
.remove = ntb_epf_pci_remove,
|
||||||
|
};
|
||||||
|
module_pci_driver(ntb_epf_pci_driver);
|
||||||
|
|
||||||
|
MODULE_DESCRIPTION("PCI ENDPOINT NTB HOST DRIVER");
|
||||||
|
MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>");
|
||||||
|
MODULE_LICENSE("GPL v2");
|
@ -36,4 +36,4 @@ obj-$(CONFIG_PCI_ENDPOINT) += endpoint/
|
|||||||
obj-y += controller/
|
obj-y += controller/
|
||||||
obj-y += switch/
|
obj-y += switch/
|
||||||
|
|
||||||
ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
|
subdir-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
|
||||||
|
@ -55,15 +55,6 @@ config PCI_RCAR_GEN2
|
|||||||
There are 3 internal PCI controllers available with a single
|
There are 3 internal PCI controllers available with a single
|
||||||
built-in EHCI/OHCI host controller present on each one.
|
built-in EHCI/OHCI host controller present on each one.
|
||||||
|
|
||||||
config PCIE_RCAR
|
|
||||||
bool "Renesas R-Car PCIe controller"
|
|
||||||
depends on ARCH_RENESAS || COMPILE_TEST
|
|
||||||
depends on PCI_MSI_IRQ_DOMAIN
|
|
||||||
select PCIE_RCAR_HOST
|
|
||||||
help
|
|
||||||
Say Y here if you want PCIe controller support on R-Car SoCs.
|
|
||||||
This option will be removed after arm64 defconfig is updated.
|
|
||||||
|
|
||||||
config PCIE_RCAR_HOST
|
config PCIE_RCAR_HOST
|
||||||
bool "Renesas R-Car PCIe host controller"
|
bool "Renesas R-Car PCIe host controller"
|
||||||
depends on ARCH_RENESAS || COMPILE_TEST
|
depends on ARCH_RENESAS || COMPILE_TEST
|
||||||
@ -242,20 +233,6 @@ config PCIE_MEDIATEK
|
|||||||
Say Y here if you want to enable PCIe controller support on
|
Say Y here if you want to enable PCIe controller support on
|
||||||
MediaTek SoCs.
|
MediaTek SoCs.
|
||||||
|
|
||||||
config PCIE_TANGO_SMP8759
|
|
||||||
bool "Tango SMP8759 PCIe controller (DANGEROUS)"
|
|
||||||
depends on ARCH_TANGO && PCI_MSI && OF
|
|
||||||
depends on BROKEN
|
|
||||||
select PCI_HOST_COMMON
|
|
||||||
help
|
|
||||||
Say Y here to enable PCIe controller support for Sigma Designs
|
|
||||||
Tango SMP8759-based systems.
|
|
||||||
|
|
||||||
Note: The SMP8759 controller multiplexes PCI config and MMIO
|
|
||||||
accesses, and Linux doesn't provide a way to serialize them.
|
|
||||||
This can lead to data corruption if drivers perform concurrent
|
|
||||||
config and MMIO accesses.
|
|
||||||
|
|
||||||
config VMD
|
config VMD
|
||||||
depends on PCI_MSI && X86_64 && SRCU
|
depends on PCI_MSI && X86_64 && SRCU
|
||||||
tristate "Intel Volume Management Device Driver"
|
tristate "Intel Volume Management Device Driver"
|
||||||
@ -273,7 +250,7 @@ config VMD
|
|||||||
|
|
||||||
config PCIE_BRCMSTB
|
config PCIE_BRCMSTB
|
||||||
tristate "Broadcom Brcmstb PCIe host controller"
|
tristate "Broadcom Brcmstb PCIe host controller"
|
||||||
depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
|
depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCM4908 || COMPILE_TEST
|
||||||
depends on OF
|
depends on OF
|
||||||
depends on PCI_MSI_IRQ_DOMAIN
|
depends on PCI_MSI_IRQ_DOMAIN
|
||||||
default ARCH_BRCMSTB
|
default ARCH_BRCMSTB
|
||||||
@ -298,6 +275,16 @@ config PCI_LOONGSON
|
|||||||
Say Y here if you want to enable PCI controller support on
|
Say Y here if you want to enable PCI controller support on
|
||||||
Loongson systems.
|
Loongson systems.
|
||||||
|
|
||||||
|
config PCIE_MICROCHIP_HOST
|
||||||
|
bool "Microchip AXI PCIe host bridge support"
|
||||||
|
depends on PCI_MSI && OF
|
||||||
|
select PCI_MSI_IRQ_DOMAIN
|
||||||
|
select GENERIC_MSI_IRQ_DOMAIN
|
||||||
|
select PCI_HOST_COMMON
|
||||||
|
help
|
||||||
|
Say Y here if you want kernel to support the Microchip AXI PCIe
|
||||||
|
Host Bridge driver.
|
||||||
|
|
||||||
config PCIE_HISI_ERR
|
config PCIE_HISI_ERR
|
||||||
depends on ACPI_APEI_GHES && (ARM64 || COMPILE_TEST)
|
depends on ACPI_APEI_GHES && (ARM64 || COMPILE_TEST)
|
||||||
bool "HiSilicon HIP PCIe controller error handling driver"
|
bool "HiSilicon HIP PCIe controller error handling driver"
|
||||||
|
@ -27,7 +27,7 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
|
|||||||
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
|
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
|
||||||
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
|
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
|
||||||
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
|
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
|
||||||
obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
|
obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
|
||||||
obj-$(CONFIG_VMD) += vmd.o
|
obj-$(CONFIG_VMD) += vmd.o
|
||||||
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
|
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
|
||||||
obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
|
obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
|
||||||
|
@ -64,6 +64,7 @@ enum j721e_pcie_mode {
|
|||||||
|
|
||||||
struct j721e_pcie_data {
|
struct j721e_pcie_data {
|
||||||
enum j721e_pcie_mode mode;
|
enum j721e_pcie_mode mode;
|
||||||
|
bool quirk_retrain_flag;
|
||||||
};
|
};
|
||||||
|
|
||||||
static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
|
static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
|
||||||
@ -280,6 +281,7 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
|
|||||||
|
|
||||||
static const struct j721e_pcie_data j721e_pcie_rc_data = {
|
static const struct j721e_pcie_data j721e_pcie_rc_data = {
|
||||||
.mode = PCI_MODE_RC,
|
.mode = PCI_MODE_RC,
|
||||||
|
.quirk_retrain_flag = true,
|
||||||
};
|
};
|
||||||
|
|
||||||
static const struct j721e_pcie_data j721e_pcie_ep_data = {
|
static const struct j721e_pcie_data j721e_pcie_ep_data = {
|
||||||
@ -388,6 +390,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
|
|||||||
|
|
||||||
bridge->ops = &cdns_ti_pcie_host_ops;
|
bridge->ops = &cdns_ti_pcie_host_ops;
|
||||||
rc = pci_host_bridge_priv(bridge);
|
rc = pci_host_bridge_priv(bridge);
|
||||||
|
rc->quirk_retrain_flag = data->quirk_retrain_flag;
|
||||||
|
|
||||||
cdns_pcie = &rc->pcie;
|
cdns_pcie = &rc->pcie;
|
||||||
cdns_pcie->dev = dev;
|
cdns_pcie->dev = dev;
|
||||||
|
@ -382,6 +382,57 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
|
||||||
|
phys_addr_t addr, u8 interrupt_num,
|
||||||
|
u32 entry_size, u32 *msi_data,
|
||||||
|
u32 *msi_addr_offset)
|
||||||
|
{
|
||||||
|
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
|
||||||
|
u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
|
||||||
|
struct cdns_pcie *pcie = &ep->pcie;
|
||||||
|
u64 pci_addr, pci_addr_mask = 0xff;
|
||||||
|
u16 flags, mme, data, data_mask;
|
||||||
|
u8 msi_count;
|
||||||
|
int ret;
|
||||||
|
int i;
|
||||||
|
|
||||||
|
/* Check whether the MSI feature has been enabled by the PCI host. */
|
||||||
|
flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
|
||||||
|
if (!(flags & PCI_MSI_FLAGS_ENABLE))
|
||||||
|
return -EINVAL;
|
||||||
|
|
||||||
|
/* Get the number of enabled MSIs */
|
||||||
|
mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
|
||||||
|
msi_count = 1 << mme;
|
||||||
|
if (!interrupt_num || interrupt_num > msi_count)
|
||||||
|
return -EINVAL;
|
||||||
|
|
||||||
|
/* Compute the data value to be written. */
|
||||||
|
data_mask = msi_count - 1;
|
||||||
|
data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
|
||||||
|
data = data & ~data_mask;
|
||||||
|
|
||||||
|
/* Get the PCI address where to write the data into. */
|
||||||
|
pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
|
||||||
|
pci_addr <<= 32;
|
||||||
|
pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
|
||||||
|
pci_addr &= GENMASK_ULL(63, 2);
|
||||||
|
|
||||||
|
for (i = 0; i < interrupt_num; i++) {
|
||||||
|
ret = cdns_pcie_ep_map_addr(epc, fn, addr,
|
||||||
|
pci_addr & ~pci_addr_mask,
|
||||||
|
entry_size);
|
||||||
|
if (ret)
|
||||||
|
return ret;
|
||||||
|
addr = addr + entry_size;
|
||||||
|
}
|
||||||
|
|
||||||
|
*msi_data = data;
|
||||||
|
*msi_addr_offset = pci_addr & pci_addr_mask;
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
|
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
|
||||||
u16 interrupt_num)
|
u16 interrupt_num)
|
||||||
{
|
{
|
||||||
@ -455,18 +506,13 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
|
|||||||
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
|
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
|
||||||
struct cdns_pcie *pcie = &ep->pcie;
|
struct cdns_pcie *pcie = &ep->pcie;
|
||||||
struct device *dev = pcie->dev;
|
struct device *dev = pcie->dev;
|
||||||
struct pci_epf *epf;
|
|
||||||
u32 cfg;
|
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* BIT(0) is hardwired to 1, hence function 0 is always enabled
|
* BIT(0) is hardwired to 1, hence function 0 is always enabled
|
||||||
* and can't be disabled anyway.
|
* and can't be disabled anyway.
|
||||||
*/
|
*/
|
||||||
cfg = BIT(0);
|
cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
|
||||||
list_for_each_entry(epf, &epc->pci_epf, list)
|
|
||||||
cfg |= BIT(epf->func_no);
|
|
||||||
cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
|
|
||||||
|
|
||||||
ret = cdns_pcie_start_link(pcie);
|
ret = cdns_pcie_start_link(pcie);
|
||||||
if (ret) {
|
if (ret) {
|
||||||
@ -481,6 +527,7 @@ static const struct pci_epc_features cdns_pcie_epc_features = {
|
|||||||
.linkup_notifier = false,
|
.linkup_notifier = false,
|
||||||
.msi_capable = true,
|
.msi_capable = true,
|
||||||
.msix_capable = true,
|
.msix_capable = true,
|
||||||
|
.align = 256,
|
||||||
};
|
};
|
||||||
|
|
||||||
static const struct pci_epc_features*
|
static const struct pci_epc_features*
|
||||||
@ -500,6 +547,7 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = {
|
|||||||
.set_msix = cdns_pcie_ep_set_msix,
|
.set_msix = cdns_pcie_ep_set_msix,
|
||||||
.get_msix = cdns_pcie_ep_get_msix,
|
.get_msix = cdns_pcie_ep_get_msix,
|
||||||
.raise_irq = cdns_pcie_ep_raise_irq,
|
.raise_irq = cdns_pcie_ep_raise_irq,
|
||||||
|
.map_msi_irq = cdns_pcie_ep_map_msi_irq,
|
||||||
.start = cdns_pcie_ep_start,
|
.start = cdns_pcie_ep_start,
|
||||||
.get_features = cdns_pcie_ep_get_features,
|
.get_features = cdns_pcie_ep_get_features,
|
||||||
};
|
};
|
||||||
|
@ -77,6 +77,68 @@ static struct pci_ops cdns_pcie_host_ops = {
|
|||||||
.write = pci_generic_config_write,
|
.write = pci_generic_config_write,
|
||||||
};
|
};
|
||||||
|
|
||||||
|
static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
|
||||||
|
{
|
||||||
|
struct device *dev = pcie->dev;
|
||||||
|
int retries;
|
||||||
|
|
||||||
|
/* Check if the link is up or not */
|
||||||
|
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
|
||||||
|
if (cdns_pcie_link_up(pcie)) {
|
||||||
|
dev_info(dev, "Link up\n");
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
|
||||||
|
}
|
||||||
|
|
||||||
|
return -ETIMEDOUT;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int cdns_pcie_retrain(struct cdns_pcie *pcie)
|
||||||
|
{
|
||||||
|
u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
|
||||||
|
u16 lnk_stat, lnk_ctl;
|
||||||
|
int ret = 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Set retrain bit if current speed is 2.5 GB/s,
|
||||||
|
* but the PCIe root port support is > 2.5 GB/s.
|
||||||
|
*/
|
||||||
|
|
||||||
|
lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
|
||||||
|
PCI_EXP_LNKCAP));
|
||||||
|
if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
|
||||||
|
return ret;
|
||||||
|
|
||||||
|
lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
|
||||||
|
if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
|
||||||
|
lnk_ctl = cdns_pcie_rp_readw(pcie,
|
||||||
|
pcie_cap_off + PCI_EXP_LNKCTL);
|
||||||
|
lnk_ctl |= PCI_EXP_LNKCTL_RL;
|
||||||
|
cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
|
||||||
|
lnk_ctl);
|
||||||
|
|
||||||
|
ret = cdns_pcie_host_wait_for_link(pcie);
|
||||||
|
}
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
|
||||||
|
{
|
||||||
|
struct cdns_pcie *pcie = &rc->pcie;
|
||||||
|
int ret;
|
||||||
|
|
||||||
|
ret = cdns_pcie_host_wait_for_link(pcie);
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Retrain link for Gen2 training defect
|
||||||
|
* if quirk flag is set.
|
||||||
|
*/
|
||||||
|
if (!ret && rc->quirk_retrain_flag)
|
||||||
|
ret = cdns_pcie_retrain(pcie);
|
||||||
|
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
|
static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
|
||||||
{
|
{
|
||||||
@ -321,9 +383,10 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
|
|||||||
|
|
||||||
resource_list_for_each_entry(entry, &bridge->dma_ranges) {
|
resource_list_for_each_entry(entry, &bridge->dma_ranges) {
|
||||||
err = cdns_pcie_host_bar_config(rc, entry);
|
err = cdns_pcie_host_bar_config(rc, entry);
|
||||||
if (err)
|
if (err) {
|
||||||
dev_err(dev, "Fail to configure IB using dma-ranges\n");
|
dev_err(dev, "Fail to configure IB using dma-ranges\n");
|
||||||
return err;
|
return err;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
@ -398,23 +461,6 @@ static int cdns_pcie_host_init(struct device *dev,
|
|||||||
return cdns_pcie_host_init_address_translation(rc);
|
return cdns_pcie_host_init_address_translation(rc);
|
||||||
}
|
}
|
||||||
|
|
||||||
static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
|
|
||||||
{
|
|
||||||
struct device *dev = pcie->dev;
|
|
||||||
int retries;
|
|
||||||
|
|
||||||
/* Check if the link is up or not */
|
|
||||||
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
|
|
||||||
if (cdns_pcie_link_up(pcie)) {
|
|
||||||
dev_info(dev, "Link up\n");
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
|
|
||||||
}
|
|
||||||
|
|
||||||
return -ETIMEDOUT;
|
|
||||||
}
|
|
||||||
|
|
||||||
int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
|
int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
|
||||||
{
|
{
|
||||||
struct device *dev = rc->pcie.dev;
|
struct device *dev = rc->pcie.dev;
|
||||||
@ -457,7 +503,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
|
|||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = cdns_pcie_host_wait_for_link(pcie);
|
ret = cdns_pcie_host_start_link(rc);
|
||||||
if (ret)
|
if (ret)
|
||||||
dev_dbg(dev, "PCIe link never came up\n");
|
dev_dbg(dev, "PCIe link never came up\n");
|
||||||
|
|
||||||
|
@ -119,7 +119,7 @@
|
|||||||
* Root Port Registers (PCI configuration space for the root port function)
|
* Root Port Registers (PCI configuration space for the root port function)
|
||||||
*/
|
*/
|
||||||
#define CDNS_PCIE_RP_BASE 0x00200000
|
#define CDNS_PCIE_RP_BASE 0x00200000
|
||||||
|
#define CDNS_PCIE_RP_CAP_OFFSET 0xc0
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Address Translation Registers
|
* Address Translation Registers
|
||||||
@ -291,6 +291,7 @@ struct cdns_pcie {
|
|||||||
* @device_id: PCI device ID
|
* @device_id: PCI device ID
|
||||||
* @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
|
* @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
|
||||||
* available
|
* available
|
||||||
|
* @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
|
||||||
*/
|
*/
|
||||||
struct cdns_pcie_rc {
|
struct cdns_pcie_rc {
|
||||||
struct cdns_pcie pcie;
|
struct cdns_pcie pcie;
|
||||||
@ -299,6 +300,7 @@ struct cdns_pcie_rc {
|
|||||||
u32 vendor_id;
|
u32 vendor_id;
|
||||||
u32 device_id;
|
u32 device_id;
|
||||||
bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
|
bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
|
||||||
|
bool quirk_retrain_flag;
|
||||||
};
|
};
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@ -414,6 +416,13 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
|
|||||||
cdns_pcie_write_sz(addr, 0x2, value);
|
cdns_pcie_write_sz(addr, 0x2, value);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
|
||||||
|
{
|
||||||
|
void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
|
||||||
|
|
||||||
|
return cdns_pcie_read_sz(addr, 0x2);
|
||||||
|
}
|
||||||
|
|
||||||
/* Endpoint Function register access */
|
/* Endpoint Function register access */
|
||||||
static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
|
static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
|
||||||
u32 reg, u8 value)
|
u32 reg, u8 value)
|
||||||
|
@ -115,10 +115,17 @@ static const struct ls_pcie_ep_drvdata ls2_ep_drvdata = {
|
|||||||
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
|
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
|
||||||
};
|
};
|
||||||
|
|
||||||
|
static const struct ls_pcie_ep_drvdata lx2_ep_drvdata = {
|
||||||
|
.func_offset = 0x8000,
|
||||||
|
.ops = &ls_pcie_ep_ops,
|
||||||
|
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
|
||||||
|
};
|
||||||
|
|
||||||
static const struct of_device_id ls_pcie_ep_of_match[] = {
|
static const struct of_device_id ls_pcie_ep_of_match[] = {
|
||||||
{ .compatible = "fsl,ls1046a-pcie-ep", .data = &ls1_ep_drvdata },
|
{ .compatible = "fsl,ls1046a-pcie-ep", .data = &ls1_ep_drvdata },
|
||||||
{ .compatible = "fsl,ls1088a-pcie-ep", .data = &ls2_ep_drvdata },
|
{ .compatible = "fsl,ls1088a-pcie-ep", .data = &ls2_ep_drvdata },
|
||||||
{ .compatible = "fsl,ls2088a-pcie-ep", .data = &ls2_ep_drvdata },
|
{ .compatible = "fsl,ls2088a-pcie-ep", .data = &ls2_ep_drvdata },
|
||||||
|
{ .compatible = "fsl,lx2160ar2-pcie-ep", .data = &lx2_ep_drvdata },
|
||||||
{ },
|
{ },
|
||||||
};
|
};
|
||||||
|
|
||||||
|
@ -232,7 +232,7 @@ static const struct of_device_id ls_pcie_of_match[] = {
|
|||||||
{ },
|
{ },
|
||||||
};
|
};
|
||||||
|
|
||||||
static int __init ls_pcie_probe(struct platform_device *pdev)
|
static int ls_pcie_probe(struct platform_device *pdev)
|
||||||
{
|
{
|
||||||
struct device *dev = &pdev->dev;
|
struct device *dev = &pdev->dev;
|
||||||
struct dw_pcie *pci;
|
struct dw_pcie *pci;
|
||||||
@ -271,10 +271,11 @@ static int __init ls_pcie_probe(struct platform_device *pdev)
|
|||||||
}
|
}
|
||||||
|
|
||||||
static struct platform_driver ls_pcie_driver = {
|
static struct platform_driver ls_pcie_driver = {
|
||||||
|
.probe = ls_pcie_probe,
|
||||||
.driver = {
|
.driver = {
|
||||||
.name = "layerscape-pcie",
|
.name = "layerscape-pcie",
|
||||||
.of_match_table = ls_pcie_of_match,
|
.of_match_table = ls_pcie_of_match,
|
||||||
.suppress_bind_attrs = true,
|
.suppress_bind_attrs = true,
|
||||||
},
|
},
|
||||||
};
|
};
|
||||||
builtin_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);
|
builtin_platform_driver(ls_pcie_driver);
|
||||||
|
@ -314,9 +314,6 @@ static const struct dw_pcie_host_ops al_pcie_host_ops = {
|
|||||||
.host_init = al_pcie_host_init,
|
.host_init = al_pcie_host_init,
|
||||||
};
|
};
|
||||||
|
|
||||||
static const struct dw_pcie_ops dw_pcie_ops = {
|
|
||||||
};
|
|
||||||
|
|
||||||
static int al_pcie_probe(struct platform_device *pdev)
|
static int al_pcie_probe(struct platform_device *pdev)
|
||||||
{
|
{
|
||||||
struct device *dev = &pdev->dev;
|
struct device *dev = &pdev->dev;
|
||||||
@ -334,7 +331,6 @@ static int al_pcie_probe(struct platform_device *pdev)
|
|||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
pci->dev = dev;
|
pci->dev = dev;
|
||||||
pci->ops = &dw_pcie_ops;
|
|
||||||
pci->pp.ops = &al_pcie_host_ops;
|
pci->pp.ops = &al_pcie_host_ops;
|
||||||
|
|
||||||
al_pcie->pci = pci;
|
al_pcie->pci = pci;
|
||||||
|
@ -434,10 +434,8 @@ static void dw_pcie_ep_stop(struct pci_epc *epc)
|
|||||||
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
|
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
|
||||||
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
|
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
|
||||||
|
|
||||||
if (!pci->ops->stop_link)
|
if (pci->ops && pci->ops->stop_link)
|
||||||
return;
|
pci->ops->stop_link(pci);
|
||||||
|
|
||||||
pci->ops->stop_link(pci);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static int dw_pcie_ep_start(struct pci_epc *epc)
|
static int dw_pcie_ep_start(struct pci_epc *epc)
|
||||||
@ -445,7 +443,7 @@ static int dw_pcie_ep_start(struct pci_epc *epc)
|
|||||||
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
|
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
|
||||||
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
|
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
|
||||||
|
|
||||||
if (!pci->ops->start_link)
|
if (!pci->ops || !pci->ops->start_link)
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
|
|
||||||
return pci->ops->start_link(pci);
|
return pci->ops->start_link(pci);
|
||||||
|
@ -258,10 +258,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
|
|||||||
|
|
||||||
static void dw_pcie_free_msi(struct pcie_port *pp)
|
static void dw_pcie_free_msi(struct pcie_port *pp)
|
||||||
{
|
{
|
||||||
if (pp->msi_irq) {
|
if (pp->msi_irq)
|
||||||
irq_set_chained_handler(pp->msi_irq, NULL);
|
irq_set_chained_handler_and_data(pp->msi_irq, NULL, NULL);
|
||||||
irq_set_handler_data(pp->msi_irq, NULL);
|
|
||||||
}
|
|
||||||
|
|
||||||
irq_domain_remove(pp->msi_domain);
|
irq_domain_remove(pp->msi_domain);
|
||||||
irq_domain_remove(pp->irq_domain);
|
irq_domain_remove(pp->irq_domain);
|
||||||
@ -305,8 +303,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
|
|||||||
if (cfg_res) {
|
if (cfg_res) {
|
||||||
pp->cfg0_size = resource_size(cfg_res);
|
pp->cfg0_size = resource_size(cfg_res);
|
||||||
pp->cfg0_base = cfg_res->start;
|
pp->cfg0_base = cfg_res->start;
|
||||||
} else if (!pp->va_cfg0_base) {
|
|
||||||
|
pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, cfg_res);
|
||||||
|
if (IS_ERR(pp->va_cfg0_base))
|
||||||
|
return PTR_ERR(pp->va_cfg0_base);
|
||||||
|
} else {
|
||||||
dev_err(dev, "Missing *config* reg space\n");
|
dev_err(dev, "Missing *config* reg space\n");
|
||||||
|
return -ENODEV;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (!pci->dbi_base) {
|
if (!pci->dbi_base) {
|
||||||
@ -322,38 +325,12 @@ int dw_pcie_host_init(struct pcie_port *pp)
|
|||||||
|
|
||||||
pp->bridge = bridge;
|
pp->bridge = bridge;
|
||||||
|
|
||||||
/* Get the I/O and memory ranges from DT */
|
/* Get the I/O range from DT */
|
||||||
resource_list_for_each_entry(win, &bridge->windows) {
|
win = resource_list_first_type(&bridge->windows, IORESOURCE_IO);
|
||||||
switch (resource_type(win->res)) {
|
if (win) {
|
||||||
case IORESOURCE_IO:
|
pp->io_size = resource_size(win->res);
|
||||||
pp->io_size = resource_size(win->res);
|
pp->io_bus_addr = win->res->start - win->offset;
|
||||||
pp->io_bus_addr = win->res->start - win->offset;
|
pp->io_base = pci_pio_to_address(win->res->start);
|
||||||
pp->io_base = pci_pio_to_address(win->res->start);
|
|
||||||
break;
|
|
||||||
case 0:
|
|
||||||
dev_err(dev, "Missing *config* reg space\n");
|
|
||||||
pp->cfg0_size = resource_size(win->res);
|
|
||||||
pp->cfg0_base = win->res->start;
|
|
||||||
if (!pci->dbi_base) {
|
|
||||||
pci->dbi_base = devm_pci_remap_cfgspace(dev,
|
|
||||||
pp->cfg0_base,
|
|
||||||
pp->cfg0_size);
|
|
||||||
if (!pci->dbi_base) {
|
|
||||||
dev_err(dev, "Error with ioremap\n");
|
|
||||||
return -ENOMEM;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if (!pp->va_cfg0_base) {
|
|
||||||
pp->va_cfg0_base = devm_pci_remap_cfgspace(dev,
|
|
||||||
pp->cfg0_base, pp->cfg0_size);
|
|
||||||
if (!pp->va_cfg0_base) {
|
|
||||||
dev_err(dev, "Error with ioremap in function\n");
|
|
||||||
return -ENOMEM;
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pci->link_gen < 1)
|
if (pci->link_gen < 1)
|
||||||
@ -425,7 +402,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
|
|||||||
dw_pcie_setup_rc(pp);
|
dw_pcie_setup_rc(pp);
|
||||||
dw_pcie_msi_init(pp);
|
dw_pcie_msi_init(pp);
|
||||||
|
|
||||||
if (!dw_pcie_link_up(pci) && pci->ops->start_link) {
|
if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) {
|
||||||
ret = pci->ops->start_link(pci);
|
ret = pci->ops->start_link(pci);
|
||||||
if (ret)
|
if (ret)
|
||||||
goto err_free_msi;
|
goto err_free_msi;
|
||||||
|
@ -141,7 +141,7 @@ u32 dw_pcie_read_dbi(struct dw_pcie *pci, u32 reg, size_t size)
|
|||||||
int ret;
|
int ret;
|
||||||
u32 val;
|
u32 val;
|
||||||
|
|
||||||
if (pci->ops->read_dbi)
|
if (pci->ops && pci->ops->read_dbi)
|
||||||
return pci->ops->read_dbi(pci, pci->dbi_base, reg, size);
|
return pci->ops->read_dbi(pci, pci->dbi_base, reg, size);
|
||||||
|
|
||||||
ret = dw_pcie_read(pci->dbi_base + reg, size, &val);
|
ret = dw_pcie_read(pci->dbi_base + reg, size, &val);
|
||||||
@ -156,7 +156,7 @@ void dw_pcie_write_dbi(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
|
|||||||
{
|
{
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
if (pci->ops->write_dbi) {
|
if (pci->ops && pci->ops->write_dbi) {
|
||||||
pci->ops->write_dbi(pci, pci->dbi_base, reg, size, val);
|
pci->ops->write_dbi(pci, pci->dbi_base, reg, size, val);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -171,7 +171,7 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
|
|||||||
{
|
{
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
if (pci->ops->write_dbi2) {
|
if (pci->ops && pci->ops->write_dbi2) {
|
||||||
pci->ops->write_dbi2(pci, pci->dbi_base2, reg, size, val);
|
pci->ops->write_dbi2(pci, pci->dbi_base2, reg, size, val);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -186,7 +186,7 @@ static u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 reg)
|
|||||||
int ret;
|
int ret;
|
||||||
u32 val;
|
u32 val;
|
||||||
|
|
||||||
if (pci->ops->read_dbi)
|
if (pci->ops && pci->ops->read_dbi)
|
||||||
return pci->ops->read_dbi(pci, pci->atu_base, reg, 4);
|
return pci->ops->read_dbi(pci, pci->atu_base, reg, 4);
|
||||||
|
|
||||||
ret = dw_pcie_read(pci->atu_base + reg, 4, &val);
|
ret = dw_pcie_read(pci->atu_base + reg, 4, &val);
|
||||||
@ -200,7 +200,7 @@ static void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
|
|||||||
{
|
{
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
if (pci->ops->write_dbi) {
|
if (pci->ops && pci->ops->write_dbi) {
|
||||||
pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val);
|
pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -225,6 +225,47 @@ static void dw_pcie_writel_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg,
|
|||||||
dw_pcie_writel_atu(pci, offset + reg, val);
|
dw_pcie_writel_atu(pci, offset + reg, val);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static inline u32 dw_pcie_enable_ecrc(u32 val)
|
||||||
|
{
|
||||||
|
/*
|
||||||
|
* DesignWare core version 4.90A has a design issue where the 'TD'
|
||||||
|
* bit in the Control register-1 of the ATU outbound region acts
|
||||||
|
* like an override for the ECRC setting, i.e., the presence of TLP
|
||||||
|
* Digest (ECRC) in the outgoing TLPs is solely determined by this
|
||||||
|
* bit. This is contrary to the PCIe spec which says that the
|
||||||
|
* enablement of the ECRC is solely determined by the AER
|
||||||
|
* registers.
|
||||||
|
*
|
||||||
|
* Because of this, even when the ECRC is enabled through AER
|
||||||
|
* registers, the transactions going through ATU won't have TLP
|
||||||
|
* Digest as there is no way the PCI core AER code could program
|
||||||
|
* the TD bit which is specific to the DesignWare core.
|
||||||
|
*
|
||||||
|
* The best way to handle this scenario is to program the TD bit
|
||||||
|
* always. It affects only the traffic from root port to downstream
|
||||||
|
* devices.
|
||||||
|
*
|
||||||
|
* At this point,
|
||||||
|
* When ECRC is enabled in AER registers, everything works normally
|
||||||
|
* When ECRC is NOT enabled in AER registers, then,
|
||||||
|
* on Root Port:- TLP Digest (DWord size) gets appended to each packet
|
||||||
|
* even through it is not required. Since downstream
|
||||||
|
* TLPs are mostly for configuration accesses and BAR
|
||||||
|
* accesses, they are not in critical path and won't
|
||||||
|
* have much negative effect on the performance.
|
||||||
|
* on End Point:- TLP Digest is received for some/all the packets coming
|
||||||
|
* from the root port. TLP Digest is ignored because,
|
||||||
|
* as per the PCIe Spec r5.0 v1.0 section 2.2.3
|
||||||
|
* "TLP Digest Rules", when an endpoint receives TLP
|
||||||
|
* Digest when its ECRC check functionality is disabled
|
||||||
|
* in AER registers, received TLP Digest is just ignored.
|
||||||
|
* Since there is no issue or error reported either side, best way to
|
||||||
|
* handle the scenario is to program TD bit by default.
|
||||||
|
*/
|
||||||
|
|
||||||
|
return val | PCIE_ATU_TD;
|
||||||
|
}
|
||||||
|
|
||||||
static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
|
static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
|
||||||
int index, int type,
|
int index, int type,
|
||||||
u64 cpu_addr, u64 pci_addr,
|
u64 cpu_addr, u64 pci_addr,
|
||||||
@ -248,6 +289,8 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
|
|||||||
val = type | PCIE_ATU_FUNC_NUM(func_no);
|
val = type | PCIE_ATU_FUNC_NUM(func_no);
|
||||||
val = upper_32_bits(size - 1) ?
|
val = upper_32_bits(size - 1) ?
|
||||||
val | PCIE_ATU_INCREASE_REGION_SIZE : val;
|
val | PCIE_ATU_INCREASE_REGION_SIZE : val;
|
||||||
|
if (pci->version == 0x490A)
|
||||||
|
val = dw_pcie_enable_ecrc(val);
|
||||||
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, val);
|
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, val);
|
||||||
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
|
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
|
||||||
PCIE_ATU_ENABLE);
|
PCIE_ATU_ENABLE);
|
||||||
@ -273,7 +316,7 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
|
|||||||
{
|
{
|
||||||
u32 retries, val;
|
u32 retries, val;
|
||||||
|
|
||||||
if (pci->ops->cpu_addr_fixup)
|
if (pci->ops && pci->ops->cpu_addr_fixup)
|
||||||
cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
|
cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
|
||||||
|
|
||||||
if (pci->iatu_unroll_enabled) {
|
if (pci->iatu_unroll_enabled) {
|
||||||
@ -290,12 +333,19 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
|
|||||||
upper_32_bits(cpu_addr));
|
upper_32_bits(cpu_addr));
|
||||||
dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT,
|
dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT,
|
||||||
lower_32_bits(cpu_addr + size - 1));
|
lower_32_bits(cpu_addr + size - 1));
|
||||||
|
if (pci->version >= 0x460A)
|
||||||
|
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_LIMIT,
|
||||||
|
upper_32_bits(cpu_addr + size - 1));
|
||||||
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET,
|
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET,
|
||||||
lower_32_bits(pci_addr));
|
lower_32_bits(pci_addr));
|
||||||
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
|
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
|
||||||
upper_32_bits(pci_addr));
|
upper_32_bits(pci_addr));
|
||||||
dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type |
|
val = type | PCIE_ATU_FUNC_NUM(func_no);
|
||||||
PCIE_ATU_FUNC_NUM(func_no));
|
val = ((upper_32_bits(size - 1)) && (pci->version >= 0x460A)) ?
|
||||||
|
val | PCIE_ATU_INCREASE_REGION_SIZE : val;
|
||||||
|
if (pci->version == 0x490A)
|
||||||
|
val = dw_pcie_enable_ecrc(val);
|
||||||
|
dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, val);
|
||||||
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
|
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@ -321,7 +371,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
|
|||||||
|
|
||||||
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||||
int type, u64 cpu_addr, u64 pci_addr,
|
int type, u64 cpu_addr, u64 pci_addr,
|
||||||
u32 size)
|
u64 size)
|
||||||
{
|
{
|
||||||
__dw_pcie_prog_outbound_atu(pci, func_no, index, type,
|
__dw_pcie_prog_outbound_atu(pci, func_no, index, type,
|
||||||
cpu_addr, pci_addr, size);
|
cpu_addr, pci_addr, size);
|
||||||
@ -481,7 +531,7 @@ int dw_pcie_link_up(struct dw_pcie *pci)
|
|||||||
{
|
{
|
||||||
u32 val;
|
u32 val;
|
||||||
|
|
||||||
if (pci->ops->link_up)
|
if (pci->ops && pci->ops->link_up)
|
||||||
return pci->ops->link_up(pci);
|
return pci->ops->link_up(pci);
|
||||||
|
|
||||||
val = readl(pci->dbi_base + PCIE_PORT_DEBUG1);
|
val = readl(pci->dbi_base + PCIE_PORT_DEBUG1);
|
||||||
|
@ -86,6 +86,7 @@
|
|||||||
#define PCIE_ATU_TYPE_IO 0x2
|
#define PCIE_ATU_TYPE_IO 0x2
|
||||||
#define PCIE_ATU_TYPE_CFG0 0x4
|
#define PCIE_ATU_TYPE_CFG0 0x4
|
||||||
#define PCIE_ATU_TYPE_CFG1 0x5
|
#define PCIE_ATU_TYPE_CFG1 0x5
|
||||||
|
#define PCIE_ATU_TD BIT(8)
|
||||||
#define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20)
|
#define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20)
|
||||||
#define PCIE_ATU_CR2 0x908
|
#define PCIE_ATU_CR2 0x908
|
||||||
#define PCIE_ATU_ENABLE BIT(31)
|
#define PCIE_ATU_ENABLE BIT(31)
|
||||||
@ -99,6 +100,7 @@
|
|||||||
#define PCIE_ATU_DEV(x) FIELD_PREP(GENMASK(23, 19), x)
|
#define PCIE_ATU_DEV(x) FIELD_PREP(GENMASK(23, 19), x)
|
||||||
#define PCIE_ATU_FUNC(x) FIELD_PREP(GENMASK(18, 16), x)
|
#define PCIE_ATU_FUNC(x) FIELD_PREP(GENMASK(18, 16), x)
|
||||||
#define PCIE_ATU_UPPER_TARGET 0x91C
|
#define PCIE_ATU_UPPER_TARGET 0x91C
|
||||||
|
#define PCIE_ATU_UPPER_LIMIT 0x924
|
||||||
|
|
||||||
#define PCIE_MISC_CONTROL_1_OFF 0x8BC
|
#define PCIE_MISC_CONTROL_1_OFF 0x8BC
|
||||||
#define PCIE_DBI_RO_WR_EN BIT(0)
|
#define PCIE_DBI_RO_WR_EN BIT(0)
|
||||||
@ -297,7 +299,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
|
|||||||
u64 size);
|
u64 size);
|
||||||
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||||
int type, u64 cpu_addr, u64 pci_addr,
|
int type, u64 cpu_addr, u64 pci_addr,
|
||||||
u32 size);
|
u64 size);
|
||||||
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||||
int bar, u64 cpu_addr,
|
int bar, u64 cpu_addr,
|
||||||
enum dw_pcie_as_type as_type);
|
enum dw_pcie_as_type as_type);
|
||||||
|
@ -159,8 +159,10 @@ struct qcom_pcie_resources_2_3_3 {
|
|||||||
struct reset_control *rst[7];
|
struct reset_control *rst[7];
|
||||||
};
|
};
|
||||||
|
|
||||||
|
/* 6 clocks typically, 7 for sm8250 */
|
||||||
struct qcom_pcie_resources_2_7_0 {
|
struct qcom_pcie_resources_2_7_0 {
|
||||||
struct clk_bulk_data clks[6];
|
struct clk_bulk_data clks[7];
|
||||||
|
int num_clks;
|
||||||
struct regulator_bulk_data supplies[2];
|
struct regulator_bulk_data supplies[2];
|
||||||
struct reset_control *pci_reset;
|
struct reset_control *pci_reset;
|
||||||
struct clk *pipe_clk;
|
struct clk *pipe_clk;
|
||||||
@ -398,7 +400,9 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
|
|||||||
|
|
||||||
/* enable external reference clock */
|
/* enable external reference clock */
|
||||||
val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
|
val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
|
||||||
val &= ~PHY_REFCLK_USE_PAD;
|
/* USE_PAD is required only for ipq806x */
|
||||||
|
if (!of_device_is_compatible(node, "qcom,pcie-apq8064"))
|
||||||
|
val &= ~PHY_REFCLK_USE_PAD;
|
||||||
val |= PHY_REFCLK_SSP_EN;
|
val |= PHY_REFCLK_SSP_EN;
|
||||||
writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
|
writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
|
||||||
|
|
||||||
@ -1152,8 +1156,14 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
|
|||||||
res->clks[3].id = "bus_slave";
|
res->clks[3].id = "bus_slave";
|
||||||
res->clks[4].id = "slave_q2a";
|
res->clks[4].id = "slave_q2a";
|
||||||
res->clks[5].id = "tbu";
|
res->clks[5].id = "tbu";
|
||||||
|
if (of_device_is_compatible(dev->of_node, "qcom,pcie-sm8250")) {
|
||||||
|
res->clks[6].id = "ddrss_sf_tbu";
|
||||||
|
res->num_clks = 7;
|
||||||
|
} else {
|
||||||
|
res->num_clks = 6;
|
||||||
|
}
|
||||||
|
|
||||||
ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
|
ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
return ret;
|
return ret;
|
||||||
|
|
||||||
@ -1175,7 +1185,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
|
|||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
|
ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
goto err_disable_regulators;
|
goto err_disable_regulators;
|
||||||
|
|
||||||
@ -1227,7 +1237,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
|
|||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
err_disable_clocks:
|
err_disable_clocks:
|
||||||
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
|
clk_bulk_disable_unprepare(res->num_clks, res->clks);
|
||||||
err_disable_regulators:
|
err_disable_regulators:
|
||||||
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
|
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
|
||||||
|
|
||||||
@ -1238,7 +1248,7 @@ static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
|
|||||||
{
|
{
|
||||||
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
|
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
|
||||||
|
|
||||||
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
|
clk_bulk_disable_unprepare(res->num_clks, res->clks);
|
||||||
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
|
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -64,6 +64,8 @@ int pci_host_common_probe(struct platform_device *pdev)
|
|||||||
if (!bridge)
|
if (!bridge)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
|
platform_set_drvdata(pdev, bridge);
|
||||||
|
|
||||||
of_pci_check_probe_only();
|
of_pci_check_probe_only();
|
||||||
|
|
||||||
/* Parse and map our Configuration Space windows */
|
/* Parse and map our Configuration Space windows */
|
||||||
@ -78,8 +80,6 @@ int pci_host_common_probe(struct platform_device *pdev)
|
|||||||
bridge->sysdata = cfg;
|
bridge->sysdata = cfg;
|
||||||
bridge->ops = (struct pci_ops *)&ops->pci_ops;
|
bridge->ops = (struct pci_ops *)&ops->pci_ops;
|
||||||
|
|
||||||
platform_set_drvdata(pdev, bridge);
|
|
||||||
|
|
||||||
return pci_host_probe(bridge);
|
return pci_host_probe(bridge);
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(pci_host_common_probe);
|
EXPORT_SYMBOL_GPL(pci_host_common_probe);
|
||||||
|
@@ -1714,7 +1714,7 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
 	 * resumed and suspended again: see hibernation_snapshot() and
 	 * hibernation_platform_enter().
 	 *
-	 * If the memory enable bit is already set, Hyper-V sliently ignores
+	 * If the memory enable bit is already set, Hyper-V silently ignores
 	 * the below BAR updates, and the related PCI device driver can not
 	 * work, because reading from the device register(s) always returns
 	 * 0xFFFFFFFF.
@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
 		if (!msi_group->gic_irq)
 			continue;
 
-		irq_set_chained_handler(msi_group->gic_irq,
-					xgene_msi_isr);
-		err = irq_set_handler_data(msi_group->gic_irq, msi_group);
-		if (err) {
-			pr_err("failed to register GIC IRQ handler\n");
-			return -EINVAL;
-		}
+		irq_set_chained_handler_and_data(msi_group->gic_irq,
+			xgene_msi_isr, msi_group);
+
 		/*
 		 * Statically allocate MSI GIC IRQs to each CPU core.
 		 * With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
@@ -173,12 +173,13 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
 
 	/*
 	 * The v1 controller has a bug in its Configuration Request
-	 * Retry Status (CRS) logic: when CRS is enabled and we read the
-	 * Vendor and Device ID of a non-existent device, the controller
-	 * fabricates return data of 0xFFFF0001 ("device exists but is not
-	 * ready") instead of 0xFFFFFFFF ("device does not exist"). This
-	 * causes the PCI core to retry the read until it times out.
-	 * Avoid this by not claiming to support CRS.
+	 * Retry Status (CRS) logic: when CRS Software Visibility is
+	 * enabled and we read the Vendor and Device ID of a non-existent
+	 * device, the controller fabricates return data of 0xFFFF0001
+	 * ("device exists but is not ready") instead of 0xFFFFFFFF
+	 * ("device does not exist"). This causes the PCI core to retry
+	 * the read until it times out. Avoid this by not claiming to
+	 * support CRS SV.
 	 */
 	if (pci_is_root_bus(bus) && (port->version == XGENE_PCIE_IP_VER_1) &&
 	    ((where & ~0x3) == XGENE_V1_PCI_EXP_CAP + PCI_EXP_RTCTL))
@@ -204,8 +204,7 @@ static int altera_msi_remove(struct platform_device *pdev)
 	struct altera_msi *msi = platform_get_drvdata(pdev);
 
 	msi_writel(msi, 0, MSI_INTMASK);
-	irq_set_chained_handler(msi->irq, NULL);
-	irq_set_handler_data(msi->irq, NULL);
+	irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
 
 	altera_free_domains(msi);
 
@@ -97,6 +97,7 @@
 
 #define PCIE_MISC_REVISION			0x406c
 #define  BRCM_PCIE_HW_REV_33			0x0303
+#define  BRCM_PCIE_HW_REV_3_20			0x0320
 
 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT	0x4070
 #define  PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT_LIMIT_MASK	0xfff00000
@@ -187,6 +188,7 @@
 struct brcm_pcie;
 static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val);
 static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val);
+static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val);
 static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val);
 static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val);
 
@@ -203,6 +205,7 @@ enum {
 
 enum pcie_type {
 	GENERIC,
+	BCM4908,
 	BCM7278,
 	BCM2711,
 };
@@ -227,6 +230,13 @@ static const struct pcie_cfg_data generic_cfg = {
 	.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
 };
 
+static const struct pcie_cfg_data bcm4908_cfg = {
+	.offsets = pcie_offsets,
+	.type = BCM4908,
+	.perst_set = brcm_pcie_perst_set_4908,
+	.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
+};
+
 static const int pcie_offset_bcm7278[] = {
 	[RGR1_SW_INIT_1] = 0xc010,
 	[EXT_CFG_INDEX] = 0x9000,
@@ -279,6 +289,7 @@ struct brcm_pcie {
 	const int *reg_offsets;
 	enum pcie_type type;
 	struct reset_control *rescal;
+	struct reset_control *perst_reset;
 	int num_memc;
 	u64 memc_size[PCIE_BRCM_MAX_MEMC];
 	u32 hw_rev;
@@ -603,8 +614,7 @@ static void brcm_msi_remove(struct brcm_pcie *pcie)
 
 	if (!msi)
 		return;
-	irq_set_chained_handler(msi->irq, NULL);
-	irq_set_handler_data(msi->irq, NULL);
+	irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
 	brcm_free_domains(msi);
 }
 
@@ -735,6 +745,17 @@ static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32
 	writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie));
 }
 
+static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val)
+{
+	if (WARN_ONCE(!pcie->perst_reset, "missing PERST# reset controller\n"))
+		return;
+
+	if (val)
+		reset_control_assert(pcie->perst_reset);
+	else
+		reset_control_deassert(pcie->perst_reset);
+}
+
 static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val)
 {
 	u32 tmp;
@@ -1194,6 +1215,7 @@ static int brcm_pcie_remove(struct platform_device *pdev)
 
 static const struct of_device_id brcm_pcie_match[] = {
 	{ .compatible = "brcm,bcm2711-pcie", .data = &bcm2711_cfg },
+	{ .compatible = "brcm,bcm4908-pcie", .data = &bcm4908_cfg },
 	{ .compatible = "brcm,bcm7211-pcie", .data = &generic_cfg },
 	{ .compatible = "brcm,bcm7278-pcie", .data = &bcm7278_cfg },
 	{ .compatible = "brcm,bcm7216-pcie", .data = &bcm7278_cfg },
@@ -1250,6 +1272,11 @@ static int brcm_pcie_probe(struct platform_device *pdev)
 		clk_disable_unprepare(pcie->clk);
 		return PTR_ERR(pcie->rescal);
 	}
+	pcie->perst_reset = devm_reset_control_get_optional_exclusive(&pdev->dev, "perst");
+	if (IS_ERR(pcie->perst_reset)) {
+		clk_disable_unprepare(pcie->clk);
+		return PTR_ERR(pcie->perst_reset);
+	}
 
 	ret = reset_control_deassert(pcie->rescal);
 	if (ret)
@@ -1267,6 +1294,10 @@ static int brcm_pcie_probe(struct platform_device *pdev)
 		goto fail;
 
 	pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION);
+	if (pcie->type == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) {
+		dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n");
+		goto fail;
+	}
 
 	msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
 	if (pci_msi_enabled() && msi_np == pcie->np) {
@@ -1035,14 +1035,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 		err = of_pci_get_devfn(child);
 		if (err < 0) {
 			dev_err(dev, "failed to parse devfn: %d\n", err);
-			return err;
+			goto error_put_node;
 		}
 
 		slot = PCI_SLOT(err);
 
 		err = mtk_pcie_parse_port(pcie, child, slot);
 		if (err)
-			return err;
+			goto error_put_node;
 	}
 
 	err = mtk_pcie_subsys_powerup(pcie);
@@ -1058,6 +1058,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 	mtk_pcie_subsys_powerdown(pcie);
 
 	return 0;
+error_put_node:
+	of_node_put(child);
+	return err;
 }
 
 static int mtk_pcie_probe(struct platform_device *pdev)
 drivers/pci/controller/pcie-microchip-host.c | 1138 ++++++++++++++++++++++++
 (new file; diff suppressed because it is too large)
@@ -735,7 +735,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
 	}
 
 	/* setup MSI data target */
-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
+	msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0);
 	rcar_pcie_hw_enable_msi(host);
 
 	return 0;
@@ -82,7 +82,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 	}
 
 	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
							"mgmt-sticky");
 	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
 		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
 			dev_err(dev, "missing mgmt-sticky reset property in node\n");
@@ -118,11 +118,11 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 	}
 
 	if (rockchip->is_rc) {
-		rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH);
-		if (IS_ERR(rockchip->ep_gpio)) {
-			dev_err(dev, "missing ep-gpios property in node\n");
-			return PTR_ERR(rockchip->ep_gpio);
-		}
+		rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
+							GPIOD_OUT_HIGH);
+		if (IS_ERR(rockchip->ep_gpio))
+			return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
+					"failed to get ep GPIO\n");
 	}
 
 	rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
@@ -1,341 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/irqchip/chained_irq.h>
-#include <linux/irqdomain.h>
-#include <linux/pci-ecam.h>
-#include <linux/delay.h>
-#include <linux/msi.h>
-#include <linux/of_address.h>
-
-#define MSI_MAX 256
-
-#define SMP8759_MUX 0x48
-#define SMP8759_TEST_OUT 0x74
-#define SMP8759_DOORBELL 0x7c
-#define SMP8759_STATUS 0x80
-#define SMP8759_ENABLE 0xa0
-
-struct tango_pcie {
-	DECLARE_BITMAP(used_msi, MSI_MAX);
-	u64 msi_doorbell;
-	spinlock_t used_msi_lock;
-	void __iomem *base;
-	struct irq_domain *dom;
-};
-
-static void tango_msi_isr(struct irq_desc *desc)
-{
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct tango_pcie *pcie = irq_desc_get_handler_data(desc);
-	unsigned long status, base, virq, idx, pos = 0;
-
-	chained_irq_enter(chip, desc);
-	spin_lock(&pcie->used_msi_lock);
-
-	while ((pos = find_next_bit(pcie->used_msi, MSI_MAX, pos)) < MSI_MAX) {
-		base = round_down(pos, 32);
-		status = readl_relaxed(pcie->base + SMP8759_STATUS + base / 8);
-		for_each_set_bit(idx, &status, 32) {
-			virq = irq_find_mapping(pcie->dom, base + idx);
-			generic_handle_irq(virq);
-		}
-		pos = base + 32;
-	}
-
-	spin_unlock(&pcie->used_msi_lock);
-	chained_irq_exit(chip, desc);
-}
-
-static void tango_ack(struct irq_data *d)
-{
-	struct tango_pcie *pcie = d->chip_data;
-	u32 offset = (d->hwirq / 32) * 4;
-	u32 bit = BIT(d->hwirq % 32);
-
-	writel_relaxed(bit, pcie->base + SMP8759_STATUS + offset);
-}
-
-static void update_msi_enable(struct irq_data *d, bool unmask)
-{
-	unsigned long flags;
-	struct tango_pcie *pcie = d->chip_data;
-	u32 offset = (d->hwirq / 32) * 4;
-	u32 bit = BIT(d->hwirq % 32);
-	u32 val;
-
-	spin_lock_irqsave(&pcie->used_msi_lock, flags);
-	val = readl_relaxed(pcie->base + SMP8759_ENABLE + offset);
-	val = unmask ? val | bit : val & ~bit;
-	writel_relaxed(val, pcie->base + SMP8759_ENABLE + offset);
-	spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-}
-
-static void tango_mask(struct irq_data *d)
-{
-	update_msi_enable(d, false);
-}
-
-static void tango_unmask(struct irq_data *d)
-{
-	update_msi_enable(d, true);
-}
-
-static int tango_set_affinity(struct irq_data *d, const struct cpumask *mask,
-			      bool force)
-{
-	return -EINVAL;
-}
-
-static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
-{
-	struct tango_pcie *pcie = d->chip_data;
-	msg->address_lo = lower_32_bits(pcie->msi_doorbell);
-	msg->address_hi = upper_32_bits(pcie->msi_doorbell);
-	msg->data = d->hwirq;
-}
-
-static struct irq_chip tango_chip = {
-	.irq_ack = tango_ack,
-	.irq_mask = tango_mask,
-	.irq_unmask = tango_unmask,
-	.irq_set_affinity = tango_set_affinity,
-	.irq_compose_msi_msg = tango_compose_msi_msg,
-};
-
-static void msi_ack(struct irq_data *d)
-{
-	irq_chip_ack_parent(d);
-}
-
-static void msi_mask(struct irq_data *d)
-{
-	pci_msi_mask_irq(d);
-	irq_chip_mask_parent(d);
-}
-
-static void msi_unmask(struct irq_data *d)
-{
-	pci_msi_unmask_irq(d);
-	irq_chip_unmask_parent(d);
-}
-
-static struct irq_chip msi_chip = {
-	.name = "MSI",
-	.irq_ack = msi_ack,
-	.irq_mask = msi_mask,
-	.irq_unmask = msi_unmask,
-};
-
-static struct msi_domain_info msi_dom_info = {
-	.flags = MSI_FLAG_PCI_MSIX
-		| MSI_FLAG_USE_DEF_DOM_OPS
-		| MSI_FLAG_USE_DEF_CHIP_OPS,
-	.chip = &msi_chip,
-};
-
-static int tango_irq_domain_alloc(struct irq_domain *dom, unsigned int virq,
-				  unsigned int nr_irqs, void *args)
-{
-	struct tango_pcie *pcie = dom->host_data;
-	unsigned long flags;
-	int pos;
-
-	spin_lock_irqsave(&pcie->used_msi_lock, flags);
-	pos = find_first_zero_bit(pcie->used_msi, MSI_MAX);
-	if (pos >= MSI_MAX) {
-		spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-		return -ENOSPC;
-	}
-	__set_bit(pos, pcie->used_msi);
-	spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-	irq_domain_set_info(dom, virq, pos, &tango_chip,
-			    pcie, handle_edge_irq, NULL, NULL);
-
-	return 0;
-}
-
-static void tango_irq_domain_free(struct irq_domain *dom, unsigned int virq,
-				  unsigned int nr_irqs)
-{
-	unsigned long flags;
-	struct irq_data *d = irq_domain_get_irq_data(dom, virq);
-	struct tango_pcie *pcie = d->chip_data;
-
-	spin_lock_irqsave(&pcie->used_msi_lock, flags);
-	__clear_bit(d->hwirq, pcie->used_msi);
-	spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-}
-
-static const struct irq_domain_ops dom_ops = {
-	.alloc = tango_irq_domain_alloc,
-	.free = tango_irq_domain_free,
-};
-
-static int smp8759_config_read(struct pci_bus *bus, unsigned int devfn,
-			       int where, int size, u32 *val)
-{
-	struct pci_config_window *cfg = bus->sysdata;
-	struct tango_pcie *pcie = dev_get_drvdata(cfg->parent);
-	int ret;
-
-	/* Reads in configuration space outside devfn 0 return garbage */
-	if (devfn != 0)
-		return PCIBIOS_FUNC_NOT_SUPPORTED;
-
-	/*
-	 * PCI config and MMIO accesses are muxed. Linux doesn't have a
-	 * mutual exclusion mechanism for config vs. MMIO accesses, so
-	 * concurrent accesses may cause corruption.
-	 */
-	writel_relaxed(1, pcie->base + SMP8759_MUX);
-	ret = pci_generic_config_read(bus, devfn, where, size, val);
-	writel_relaxed(0, pcie->base + SMP8759_MUX);
-
-	return ret;
-}
-
-static int smp8759_config_write(struct pci_bus *bus, unsigned int devfn,
-				int where, int size, u32 val)
-{
-	struct pci_config_window *cfg = bus->sysdata;
-	struct tango_pcie *pcie = dev_get_drvdata(cfg->parent);
-	int ret;
-
-	writel_relaxed(1, pcie->base + SMP8759_MUX);
-	ret = pci_generic_config_write(bus, devfn, where, size, val);
-	writel_relaxed(0, pcie->base + SMP8759_MUX);
-
-	return ret;
-}
-
-static const struct pci_ecam_ops smp8759_ecam_ops = {
-	.pci_ops = {
-		.map_bus = pci_ecam_map_bus,
-		.read = smp8759_config_read,
-		.write = smp8759_config_write,
-	}
-};
-
-static int tango_pcie_link_up(struct tango_pcie *pcie)
-{
-	void __iomem *test_out = pcie->base + SMP8759_TEST_OUT;
-	int i;
-
-	writel_relaxed(16, test_out);
-	for (i = 0; i < 10; ++i) {
-		u32 ltssm_state = readl_relaxed(test_out) >> 8;
-		if ((ltssm_state & 0x1f) == 0xf) /* L0 */
-			return 1;
-		usleep_range(3000, 4000);
-	}
-
-	return 0;
-}
-
-static int tango_pcie_probe(struct platform_device *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct tango_pcie *pcie;
-	struct resource *res;
-	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
-	struct irq_domain *msi_dom, *irq_dom;
-	struct of_pci_range_parser parser;
-	struct of_pci_range range;
-	int virq, offset;
-
-	dev_warn(dev, "simultaneous PCI config and MMIO accesses may cause data corruption\n");
-	add_taint(TAINT_CRAP, LOCKDEP_STILL_OK);
-
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
-		return -ENOMEM;
-
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	pcie->base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(pcie->base))
-		return PTR_ERR(pcie->base);
-
-	platform_set_drvdata(pdev, pcie);
-
-	if (!tango_pcie_link_up(pcie))
-		return -ENODEV;
-
-	if (of_pci_dma_range_parser_init(&parser, dev->of_node) < 0)
-		return -ENOENT;
-
-	if (of_pci_range_parser_one(&parser, &range) == NULL)
-		return -ENOENT;
-
-	range.pci_addr += range.size;
-	pcie->msi_doorbell = range.pci_addr + res->start + SMP8759_DOORBELL;
-
-	for (offset = 0; offset < MSI_MAX / 8; offset += 4)
-		writel_relaxed(0, pcie->base + SMP8759_ENABLE + offset);
-
-	virq = platform_get_irq(pdev, 1);
-	if (virq < 0)
-		return virq;
-
-	irq_dom = irq_domain_create_linear(fwnode, MSI_MAX, &dom_ops, pcie);
-	if (!irq_dom) {
-		dev_err(dev, "Failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-
-	msi_dom = pci_msi_create_irq_domain(fwnode, &msi_dom_info, irq_dom);
-	if (!msi_dom) {
-		dev_err(dev, "Failed to create MSI domain\n");
-		irq_domain_remove(irq_dom);
-		return -ENOMEM;
-	}
-
-	pcie->dom = irq_dom;
-	spin_lock_init(&pcie->used_msi_lock);
-	irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie);
-
-	return pci_host_common_probe(pdev);
-}
-
-static const struct of_device_id tango_pcie_ids[] = {
-	{
-		.compatible = "sigma,smp8759-pcie",
-		.data = &smp8759_ecam_ops,
-	},
-	{ },
-};
-
-static struct platform_driver tango_pcie_driver = {
-	.probe = tango_pcie_probe,
-	.driver = {
-		.name = KBUILD_MODNAME,
-		.of_match_table = tango_pcie_ids,
-		.suppress_bind_attrs = true,
-	},
-};
-builtin_platform_driver(tango_pcie_driver);
-
-/*
- * The root complex advertises the wrong device class.
- * Header Type 1 is for PCI-to-PCI bridges.
- */
-static void tango_fixup_class(struct pci_dev *dev)
-{
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_class);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_class);
-
-/*
- * The root complex exposes a "fake" BAR, which is used to filter
- * bus-to-system accesses. Only accesses within the range defined by this
- * BAR are forwarded to the host, others are ignored.
- *
- * By default, the DMA framework expects an identity mapping, and DRAM0 is
- * mapped at 0x80000000.
- */
-static void tango_fixup_bar(struct pci_dev *dev)
-{
-	dev->non_compliant_bars = true;
-	pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x80000000);
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_bar);
@@ -404,6 +404,7 @@ static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie_port *port)
 	return 0;
 out:
 	xilinx_cpm_free_irq_domains(port);
+	of_node_put(pcie_intc_node);
 	dev_err(dev, "Failed to allocate IRQ domains\n");
 
 	return -ENOMEM;
@@ -12,3 +12,16 @@ config PCI_EPF_TEST
 	   for PCI Endpoint.
 
 	   If in doubt, say "N" to disable Endpoint test driver.
+
+config PCI_EPF_NTB
+	tristate "PCI Endpoint NTB driver"
+	depends on PCI_ENDPOINT
+	select CONFIGFS_FS
+	help
+	  Select this configuration option to enable the Non-Transparent
+	  Bridge (NTB) driver for PCI Endpoint. NTB driver implements NTB
+	  controller functionality using multiple PCIe endpoint instances.
+	  It can support NTB endpoint function devices created using
+	  device tree.
+
+	  If in doubt, say "N" to disable Endpoint NTB driver.
@@ -4,3 +4,4 @@
 #
 
 obj-$(CONFIG_PCI_EPF_TEST)	+= pci-epf-test.o
+obj-$(CONFIG_PCI_EPF_NTB)	+= pci-epf-ntb.o
 drivers/pci/endpoint/functions/pci-epf-ntb.c | 2128 ++++++++++++++++++++++++++
 (new file; diff suppressed because it is too large)
@@ -619,7 +619,8 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
 
 		if (epf_test->reg[bar]) {
 			pci_epc_clear_bar(epc, epf->func_no, epf_bar);
-			pci_epf_free_space(epf, epf_test->reg[bar], bar);
+			pci_epf_free_space(epf, epf_test->reg[bar], bar,
+					   PRIMARY_INTERFACE);
 		}
 	}
 }
@@ -651,7 +652,8 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
 
 		ret = pci_epc_set_bar(epc, epf->func_no, epf_bar);
 		if (ret) {
-			pci_epf_free_space(epf, epf_test->reg[bar], bar);
+			pci_epf_free_space(epf, epf_test->reg[bar], bar,
+					   PRIMARY_INTERFACE);
 			dev_err(dev, "Failed to set BAR%d\n", bar);
 			if (bar == test_reg_bar)
 				return ret;
@@ -771,7 +773,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 	}
 
 	base = pci_epf_alloc_space(epf, test_reg_size, test_reg_bar,
-				   epc_features->align);
+				   epc_features->align, PRIMARY_INTERFACE);
 	if (!base) {
 		dev_err(dev, "Failed to allocated register space\n");
 		return -ENOMEM;
@@ -789,7 +791,8 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 			continue;
 
 		base = pci_epf_alloc_space(epf, bar_size[bar], bar,
-					   epc_features->align);
+					   epc_features->align,
+					   PRIMARY_INTERFACE);
 		if (!base)
 			dev_err(dev, "Failed to allocate space for BAR%d\n",
 				bar);
@@ -834,6 +837,8 @@ static int pci_epf_test_bind(struct pci_epf *epf)
 		linkup_notifier = epc_features->linkup_notifier;
 		core_init_notifier = epc_features->core_init_notifier;
 		test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+		if (test_reg_bar < 0)
+			return -EINVAL;
 		pci_epf_configure_bar(epf, epc_features);
 	}
 
@@ -21,6 +21,9 @@ static struct config_group *controllers_group;
 
 struct pci_epf_group {
 	struct config_group group;
+	struct config_group primary_epc_group;
+	struct config_group secondary_epc_group;
+	struct delayed_work cfs_work;
 	struct pci_epf *epf;
 	int index;
 };
@@ -41,6 +44,127 @@ static inline struct pci_epc_group *to_pci_epc_group(struct config_item *item)
 	return container_of(to_config_group(item), struct pci_epc_group, group);
 }
 
+static int pci_secondary_epc_epf_link(struct config_item *epf_item,
+				      struct config_item *epc_item)
+{
+	int ret;
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc = epc_group->epc;
+	struct pci_epf *epf = epf_group->epf;
+
+	ret = pci_epc_add_epf(epc, epf, SECONDARY_INTERFACE);
+	if (ret)
+		return ret;
+
+	ret = pci_epf_bind(epf);
+	if (ret) {
+		pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void pci_secondary_epc_epf_unlink(struct config_item *epc_item,
+					 struct config_item *epf_item)
+{
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc;
+	struct pci_epf *epf;
+
+	WARN_ON_ONCE(epc_group->start);
+
+	epc = epc_group->epc;
+	epf = epf_group->epf;
+	pci_epf_unbind(epf);
+	pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_secondary_epc_item_ops = {
+	.allow_link = pci_secondary_epc_epf_link,
+	.drop_link = pci_secondary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_secondary_epc_type = {
+	.ct_item_ops = &pci_secondary_epc_item_ops,
+	.ct_owner = THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_secondary_group(struct pci_epf_group *epf_group)
+{
+	struct config_group *secondary_epc_group;
+
+	secondary_epc_group = &epf_group->secondary_epc_group;
+	config_group_init_type_name(secondary_epc_group, "secondary",
+				    &pci_secondary_epc_type);
+	configfs_register_group(&epf_group->group, secondary_epc_group);
+
+	return secondary_epc_group;
+}
+
+static int pci_primary_epc_epf_link(struct config_item *epf_item,
+				    struct config_item *epc_item)
+{
+	int ret;
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc = epc_group->epc;
+	struct pci_epf *epf = epf_group->epf;
+
+	ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
+	if (ret)
+		return ret;
+
+	ret = pci_epf_bind(epf);
+	if (ret) {
+		pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void pci_primary_epc_epf_unlink(struct config_item *epc_item,
+				       struct config_item *epf_item)
+{
+	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+	struct pci_epc *epc;
+	struct pci_epf *epf;
+
+	WARN_ON_ONCE(epc_group->start);
+
+	epc = epc_group->epc;
+	epf = epf_group->epf;
+	pci_epf_unbind(epf);
+	pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_primary_epc_item_ops = {
+	.allow_link = pci_primary_epc_epf_link,
+	.drop_link = pci_primary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_primary_epc_type = {
+	.ct_item_ops = &pci_primary_epc_item_ops,
+	.ct_owner = THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_primary_group(struct pci_epf_group *epf_group)
+{
+	struct config_group *primary_epc_group = &epf_group->primary_epc_group;
+
+	config_group_init_type_name(primary_epc_group, "primary",
+				    &pci_primary_epc_type);
+	configfs_register_group(&epf_group->group, primary_epc_group);
+
+	return primary_epc_group;
+}
+
 static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
 				   size_t len)
 {
@@ -94,13 +218,13 @@ static int pci_epc_epf_link(struct config_item *epc_item,
 	struct pci_epc *epc = epc_group->epc;
 	struct pci_epf *epf = epf_group->epf;
 
-	ret = pci_epc_add_epf(epc, epf);
+	ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
 	if (ret)
 		return ret;
 
 	ret = pci_epf_bind(epf);
 	if (ret) {
-		pci_epc_remove_epf(epc, epf);
+		pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
 		return ret;
 	}
 
@@ -120,7 +244,7 @@ static void pci_epc_epf_unlink(struct config_item *epc_item,
 	epc = epc_group->epc;
 	epf = epf_group->epf;
 	pci_epf_unbind(epf);
-	pci_epc_remove_epf(epc, epf);
+	pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
 }
 
 static struct configfs_item_operations pci_epc_item_ops = {
@@ -366,12 +490,53 @@ static struct configfs_item_operations pci_epf_ops = {
 	.release = pci_epf_release,
 };
 
+static struct config_group *pci_epf_type_make(struct config_group *group,
+					      const char *name)
+{
+	struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item);
+	struct config_group *epf_type_group;
+
+	epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group);
+	return epf_type_group;
+}
+
+static void pci_epf_type_drop(struct config_group *group,
+			      struct config_item *item)
+{
+	config_item_put(item);
+}
+
+static struct configfs_group_operations pci_epf_type_group_ops = {
+	.make_group = &pci_epf_type_make,
+	.drop_item = &pci_epf_type_drop,
+};
+
 static const struct config_item_type pci_epf_type = {
+	.ct_group_ops = &pci_epf_type_group_ops,
 	.ct_item_ops = &pci_epf_ops,
 	.ct_attrs = pci_epf_attrs,
 	.ct_owner = THIS_MODULE,
 };
 
+static void pci_epf_cfs_work(struct work_struct *work)
+{
+	struct pci_epf_group *epf_group;
+	struct config_group *group;
+
+	epf_group = container_of(work, struct pci_epf_group, cfs_work.work);
+	group = pci_ep_cfs_add_primary_group(epf_group);
+	if (IS_ERR(group)) {
+		pr_err("failed to create 'primary' EPC interface\n");
+		return;
+	}
+
+	group = pci_ep_cfs_add_secondary_group(epf_group);
+	if (IS_ERR(group)) {
+		pr_err("failed to create 'secondary' EPC interface\n");
+		return;
+	}
+}
+
 static struct config_group *pci_epf_make(struct config_group *group,
 					 const char *name)
 {
@@ -410,10 +575,15 @@ static struct config_group *pci_epf_make(struct config_group *group,
 		goto free_name;
 	}
 
+	epf->group = &epf_group->group;
 	epf_group->epf = epf;
 
 	kfree(epf_name);
 
+	INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work);
+	queue_delayed_work(system_wq, &epf_group->cfs_work,
+			   msecs_to_jiffies(1));
+
 	return &epf_group->group;
 
 free_name:
@@ -87,24 +87,50 @@ EXPORT_SYMBOL_GPL(pci_epc_get);
 * pci_epc_get_first_free_bar() - helper to get first unreserved BAR
 * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
 *
- * Invoke to get the first unreserved BAR that can be used for endpoint
+ * Invoke to get the first unreserved BAR that can be used by the endpoint
 * function. For any incorrect value in reserved_bar return '0'.
 */
-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
-					*epc_features)
+enum pci_barno
+pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features)
 {
-	int free_bar;
+	return pci_epc_get_next_free_bar(epc_features, BAR_0);
+}
+EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+
+/**
+ * pci_epc_get_next_free_bar() - helper to get unreserved BAR starting from @bar
+ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
+ * @bar: the starting BAR number from where unreserved BAR should be searched
+ *
+ * Invoke to get the next unreserved BAR starting from @bar that can be used
+ * for endpoint function. For any incorrect value in reserved_bar return '0'.
+ */
+enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+					 *epc_features, enum pci_barno bar)
+{
+	unsigned long free_bar;
 
 	if (!epc_features)
-		return 0;
+		return BAR_0;
 
-	free_bar = ffz(epc_features->reserved_bar);
+	/* If 'bar - 1' is a 64-bit BAR, move to the next BAR */
+	if ((epc_features->bar_fixed_64bit << 1) & 1 << bar)
+		bar++;
+
+	/* Find if the reserved BAR is also a 64-bit BAR */
+	free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit;
+
+	/* Set the adjacent bit if the reserved BAR is also a 64-bit BAR */
+	free_bar <<= 1;
+	free_bar |= epc_features->reserved_bar;
+
+	free_bar = find_next_zero_bit(&free_bar, 6, bar);
 	if (free_bar > 5)
-		return 0;
+		return NO_BAR;
 
 	return free_bar;
 }
-EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
 
 /**
 * pci_epc_get_features() - get the features supported by EPC
@@ -204,6 +230,47 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 }
 EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
 
+/**
+ * pci_epc_map_msi_irq() - Map physical address to MSI address and return
+ *                         MSI data
+ * @epc: the EPC device which has the MSI capability
+ * @func_no: the physical endpoint function number in the EPC device
+ * @phys_addr: the physical address of the outbound region
+ * @interrupt_num: the MSI interrupt number
+ * @entry_size: Size of Outbound address region for each interrupt
+ * @msi_data: the data that should be written in order to raise MSI interrupt
+ *            with interrupt number as 'interrupt num'
+ * @msi_addr_offset: Offset of MSI address from the aligned outbound address
+ *                   to which the MSI address is mapped
+ *
+ * Invoke to map physical address to MSI address and return MSI data. The
+ * physical address should be an address in the outbound region. This is
+ * required to implement doorbell functionality of NTB wherein EPC on either
+ * side of the interface (primary and secondary) can directly write to the
+ * physical address (in outbound region) of the other interface to ring
+ * doorbell.
+ */
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr,
+			u8 interrupt_num, u32 entry_size, u32 *msi_data,
+			u32 *msi_addr_offset)
+{
+	int ret;
+
+	if (IS_ERR_OR_NULL(epc))
+		return -EINVAL;
+
+	if (!epc->ops->map_msi_irq)
+		return -EINVAL;
+
+	mutex_lock(&epc->lock);
+	ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num,
+				    entry_size, msi_data, msi_addr_offset);
+	mutex_unlock(&epc->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pci_epc_map_msi_irq);
+
 /**
 * pci_epc_get_msi() - get the number of MSI interrupt numbers allocated
 * @epc: the EPC device to which MSI interrupts was requested
@@ -467,21 +534,28 @@ EXPORT_SYMBOL_GPL(pci_epc_write_header);
 * pci_epc_add_epf() - bind PCI endpoint function to an endpoint controller
 * @epc: the EPC device to which the endpoint function should be added
 * @epf: the endpoint function to be added
+ * @type: Identifies if the EPC is connected to the primary or secondary
+ *        interface of EPF
 *
 * A PCI endpoint device can have one or more functions. In the case of PCIe,
 * the specification allows up to 8 PCIe endpoint functions. Invoke
 * pci_epc_add_epf() to add a PCI endpoint function to an endpoint controller.
 */
-int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
+int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf,
+		    enum pci_epc_interface_type type)
 {
+	struct list_head *list;
 	u32 func_no;
 	int ret = 0;
 
-	if (epf->epc)
+	if (IS_ERR_OR_NULL(epc))
+		return -EINVAL;
+
+	if (type == PRIMARY_INTERFACE && epf->epc)
 		return -EBUSY;
 
-	if (IS_ERR(epc))
-		return -EINVAL;
+	if (type == SECONDARY_INTERFACE && epf->sec_epc)
+		return -EBUSY;
 
 	mutex_lock(&epc->lock);
 	func_no = find_first_zero_bit(&epc->function_num_map,
@@ -498,11 +572,17 @@ int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
 	}
 
 	set_bit(func_no, &epc->function_num_map);
-	epf->func_no = func_no;
-	epf->epc = epc;
-	list_add_tail(&epf->list, &epc->pci_epf);
+	if (type == PRIMARY_INTERFACE) {
+		epf->func_no = func_no;
+		epf->epc = epc;
+		list = &epf->list;
+	} else {
+		epf->sec_epc_func_no = func_no;
+		epf->sec_epc = epc;
+		list = &epf->sec_epc_list;
+	}
+
+	list_add_tail(list, &epc->pci_epf);
 ret:
 	mutex_unlock(&epc->lock);
 
@@ -517,14 +597,26 @@ EXPORT_SYMBOL_GPL(pci_epc_add_epf);
 *
 * Invoke to remove PCI endpoint function from the endpoint controller.
 */
-void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf)
+void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
+			enum pci_epc_interface_type type)
 {
+	struct list_head *list;
+	u32 func_no = 0;
+
 	if (!epc || IS_ERR(epc) || !epf)
 		return;
 
+	if (type == PRIMARY_INTERFACE) {
+		func_no = epf->func_no;
+		list = &epf->list;
+	} else {
+		func_no = epf->sec_epc_func_no;
+		list = &epf->sec_epc_list;
+	}
+
 	mutex_lock(&epc->lock);
-	clear_bit(epf->func_no, &epc->function_num_map);
-	list_del(&epf->list);
+	clear_bit(func_no, &epc->function_num_map);
+	list_del(list);
 	epf->epc = NULL;
 	mutex_unlock(&epc->lock);
 }
@@ -20,6 +20,38 @@ static DEFINE_MUTEX(pci_epf_mutex);
 static struct bus_type pci_epf_bus_type;
 static const struct device_type pci_epf_type;
 
+/**
+ * pci_epf_type_add_cfs() - Help function drivers to expose function specific
+ *                          attributes in configfs
+ * @epf: the EPF device that has to be configured using configfs
+ * @group: the parent configfs group (corresponding to entries in
+ *         pci_epf_device_id)
+ *
+ * Invoke to expose function specific attributes in configfs. If the function
+ * driver does not have anything to expose (attributes configured by user),
+ * return NULL.
+ */
+struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+					  struct config_group *group)
+{
+	struct config_group *epf_type_group;
+
+	if (!epf->driver) {
+		dev_err(&epf->dev, "epf device not bound to driver\n");
+		return NULL;
+	}
+
+	if (!epf->driver->ops->add_cfs)
+		return NULL;
+
+	mutex_lock(&epf->lock);
+	epf_type_group = epf->driver->ops->add_cfs(epf, group);
+	mutex_unlock(&epf->lock);
+
+	return epf_type_group;
+}
+EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
+
 /**
 * pci_epf_unbind() - Notify the function driver that the binding between the
 *                    EPF device and EPC device has been lost
@@ -74,24 +106,37 @@ EXPORT_SYMBOL_GPL(pci_epf_bind);
 * @epf: the EPF device from whom to free the memory
 * @addr: the virtual address of the PCI EPF register space
 * @bar: the BAR number corresponding to the register space
+ * @type: Identifies if the allocated space is for primary EPC or secondary EPC
 *
 * Invoke to free the allocated PCI EPF register space.
 */
-void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar)
+void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
+			enum pci_epc_interface_type type)
 {
 	struct device *dev = epf->epc->dev.parent;
+	struct pci_epf_bar *epf_bar;
+	struct pci_epc *epc;
 
 	if (!addr)
 		return;
 
-	dma_free_coherent(dev, epf->bar[bar].size, addr,
-			  epf->bar[bar].phys_addr);
+	if (type == PRIMARY_INTERFACE) {
+		epc = epf->epc;
+		epf_bar = epf->bar;
+	} else {
+		epc = epf->sec_epc;
+		epf_bar = epf->sec_epc_bar;
+	}
 
-	epf->bar[bar].phys_addr = 0;
-	epf->bar[bar].addr = NULL;
-	epf->bar[bar].size = 0;
-	epf->bar[bar].barno = 0;
-	epf->bar[bar].flags = 0;
+	dev = epc->dev.parent;
+	dma_free_coherent(dev, epf_bar[bar].size, addr,
+			  epf_bar[bar].phys_addr);
+
+	epf_bar[bar].phys_addr = 0;
+	epf_bar[bar].addr = NULL;
+	epf_bar[bar].size = 0;
+	epf_bar[bar].barno = 0;
+	epf_bar[bar].flags = 0;
 }
 EXPORT_SYMBOL_GPL(pci_epf_free_space);
 
@@ -101,15 +146,18 @@ EXPORT_SYMBOL_GPL(pci_epf_free_space);
 * @size: the size of the memory that has to be allocated
 * @bar: the BAR number corresponding to the allocated register space
 * @align: alignment size for the allocation region
+ * @type: Identifies if the allocation is for primary EPC or secondary EPC
 *
 * Invoke to allocate memory for the PCI EPF register space.
 */
 void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
-			  size_t align)
+			  size_t align, enum pci_epc_interface_type type)
 {
-	void *space;
-	struct device *dev = epf->epc->dev.parent;
+	struct pci_epf_bar *epf_bar;
 	dma_addr_t phys_addr;
+	struct pci_epc *epc;
+	struct device *dev;
+	void *space;
 
 	if (size < 128)
 		size = 128;
@@ -119,17 +167,26 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
 	else
 		size = roundup_pow_of_two(size);
 
+	if (type == PRIMARY_INTERFACE) {
+		epc = epf->epc;
+		epf_bar = epf->bar;
+	} else {
+		epc = epf->sec_epc;
+		epf_bar = epf->sec_epc_bar;
+	}
+
+	dev = epc->dev.parent;
 	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
 	if (!space) {
 		dev_err(dev, "failed to allocate mem space\n");
 		return NULL;
 	}
 
-	epf->bar[bar].phys_addr = phys_addr;
-	epf->bar[bar].addr = space;
-	epf->bar[bar].size = size;
-	epf->bar[bar].barno = bar;
-	epf->bar[bar].flags |= upper_32_bits(size) ?
+	epf_bar[bar].phys_addr = phys_addr;
+	epf_bar[bar].addr = space;
+	epf_bar[bar].size = size;
+	epf_bar[bar].barno = bar;
+	epf_bar[bar].flags |= upper_32_bits(size) ?
 				PCI_BASE_ADDRESS_MEM_TYPE_64 :
 				PCI_BASE_ADDRESS_MEM_TYPE_32;
 
@@ -282,22 +339,6 @@ struct pci_epf *pci_epf_create(const char *name)
 }
 EXPORT_SYMBOL_GPL(pci_epf_create);
 
-const struct pci_epf_device_id *
-pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf)
-{
-	if (!id || !epf)
-		return NULL;
-
-	while (*id->name) {
-		if (strcmp(epf->name, id->name) == 0)
-			return id;
-		id++;
-	}
-
-	return NULL;
-}
-EXPORT_SYMBOL_GPL(pci_epf_match_device);
-
 static void pci_epf_dev_release(struct device *dev)
 {
 	struct pci_epf *epf = to_pci_epf(dev);
@@ -176,9 +176,6 @@ int acpiphp_unregister_attention(struct acpiphp_attention_info *info);
 int acpiphp_register_hotplug_slot(struct acpiphp_slot *slot, unsigned int sun);
 void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *slot);
 
-/* acpiphp_glue.c */
-typedef int (*acpiphp_callback)(struct acpiphp_slot *slot, void *data);
-
 int acpiphp_enable_slot(struct acpiphp_slot *slot);
 int acpiphp_disable_slot(struct acpiphp_slot *slot);
 u8 acpiphp_get_power_status(struct acpiphp_slot *slot);
@@ -21,8 +21,9 @@
 #include "pci-bridge-emul.h"
 
 #define PCI_BRIDGE_CONF_END	PCI_STD_HEADER_SIZEOF
+#define PCI_CAP_PCIE_SIZEOF	(PCI_EXP_SLTSTA2 + 2)
 #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
-#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+#define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF)
 
 /**
  * struct pci_bridge_reg_behavior - register bits behaviors
@@ -46,7 +47,8 @@ struct pci_bridge_reg_behavior {
 	u32 w1c;
 };
 
-static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+static const
+struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
 	[PCI_VENDOR_ID / 4] = { .ro = ~0 },
 	[PCI_COMMAND / 4] = {
 		.rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
@@ -164,7 +166,8 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
 	},
 };
 
-static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+static const
+struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] = {
 	[PCI_CAP_LIST_ID / 4] = {
 		/*
 		 * Capability ID, Next Capability Pointer and
@@ -260,6 +263,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
 int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
 			 unsigned int flags)
 {
+	BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
+
 	bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16);
 	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
 	bridge->conf.cache_line_size = 0x10;
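The pci-bridge-emul hunks above size both behavior tables from the config-space region they describe (PCI_STD_HEADER_SIZEOF / 4 and PCI_CAP_PCIE_SIZEOF / 4) and add a BUILD_BUG_ON() so the emulated config structure and the tables cannot silently drift apart. For reference, a minimal userspace sketch of the same idea, using C11 _Static_assert in place of the kernel's BUILD_BUG_ON(); the structure and sizes below are invented for illustration, not kernel code:

/* Toy model: size a per-register behavior table from the structure it
 * describes, and fail the build if the two drift apart.
 * Compile: cc -std=c11 -Wall behav.c */
#include <stdint.h>
#include <stdio.h>

#define CONF_END 0x40                    /* bytes of emulated config space */

struct conf_space {                      /* must stay exactly CONF_END bytes */
        uint32_t regs[CONF_END / 4];
};

struct reg_behavior {
        uint32_t ro, rw, w1c;            /* read-only / read-write / write-1-to-clear */
};

/* One behavior entry per 32-bit register in the emulated space. */
static const struct reg_behavior behavior[CONF_END / 4] = {
        [0] = { .ro = ~0u },             /* e.g. a read-only ID register */
};

/* Userspace stand-in for the BUILD_BUG_ON() added in pci_bridge_emul_init(). */
_Static_assert(sizeof(struct conf_space) == CONF_END,
               "conf space and behavior table are out of sync");

int main(void)
{
        printf("%zu registers described\n",
               sizeof(behavior) / sizeof(behavior[0]));
        return 0;
}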
@@ -4030,6 +4030,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
 	ret = logic_pio_register_range(range);
 	if (ret)
 		kfree(range);
+
+	/* Ignore duplicates due to deferred probing */
+	if (ret == -EEXIST)
+		ret = 0;
 #endif
 
 	return ret;
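The hunk above pairs with the logic_pio_register_range() change near the end of this diff: the library now reports an already-registered range as -EEXIST instead of leaking or double-freeing it, and pci_register_io_range() collapses that code back to 0, because a duplicate registration caused by deferred probing is not a failure. A small self-contained sketch of that error-collapsing pattern; register_range() and register_io_range() below are invented stand-ins, not kernel functions:

/* Toy illustration: a registration helper reports duplicates as -EEXIST,
 * and the caller treats that particular failure as success. */
#include <errno.h>
#include <stdio.h>

static int registered_start = -1;

/* Hypothetical stand-in for logic_pio_register_range(). */
static int register_range(int start)
{
        if (registered_start == start)
                return -EEXIST;          /* already there: caller decides */
        registered_start = start;
        return 0;
}

/* Hypothetical stand-in for pci_register_io_range(). */
static int register_io_range(int start)
{
        int ret = register_range(start);

        /* Ignore duplicates due to deferred probing */
        if (ret == -EEXIST)
                ret = 0;
        return ret;
}

int main(void)
{
        printf("first:  %d\n", register_io_range(0x1000));  /* 0 */
        printf("second: %d\n", register_io_range(0x1000));  /* still 0 */
        return 0;
}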
@@ -133,14 +133,6 @@ config PCIE_PTM
 	  This is only useful if you have devices that support PTM, but it
 	  is safe to enable even if you don't.
 
-config PCIE_BW
-	bool "PCI Express Bandwidth Change Notification"
-	depends on PCIEPORTBUS
-	help
-	  This enables PCI Express Bandwidth Change Notification.  If
-	  you know link width or rate changes occur only to correct
-	  unreliable links, you may answer Y.
-
 config PCIE_EDR
 	bool "PCI Express Error Disconnect Recover support"
 	depends on PCIE_DPC && ACPI
@@ -12,5 +12,4 @@ obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
 obj-$(CONFIG_PCIE_PME)	+= pme.o
 obj-$(CONFIG_PCIE_DPC)	+= dpc.o
 obj-$(CONFIG_PCIE_PTM)	+= ptm.o
-obj-$(CONFIG_PCIE_BW)	+= bw_notification.o
 obj-$(CONFIG_PCIE_EDR)	+= edr.o
@@ -1388,7 +1388,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 	if (type == PCI_EXP_TYPE_RC_END)
 		root = dev->rcec;
 	else
-		root = dev;
+		root = pcie_find_root_port(dev);
 
 	/*
 	 * If the platform retained control of AER, an RCiEP may not have
@@ -1414,7 +1414,8 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 		}
 	} else {
 		rc = pci_bus_error_reset(dev);
-		pci_info(dev, "Root Port link has been reset (%d)\n", rc);
+		pci_info(dev, "%s Port link has been reset (%d)\n",
+			 pci_is_root_bus(dev->bus) ? "Root" : "Downstream", rc);
 	}
 
 	if ((host->native_aer || pcie_ports_native) && aer) {
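With the aer_root_reset() change above, a fatal error reported by an RCiEP is still handled through dev->rcec, but for other devices the reset target is now the Root Port found by walking up from the reporting device (pcie_find_root_port()) rather than the device itself, and the log line states whether a Root Port or a Downstream Port was reset. A toy sketch of the walk-up-to-the-root idea; the structures here are simplified stand-ins, not the kernel's:

/* Toy model of finding the topmost (root) port above a device by walking
 * parent pointers, similar in spirit to pcie_find_root_port(). */
#include <stddef.h>
#include <stdio.h>

struct toy_dev {
        const char *name;
        struct toy_dev *parent;          /* NULL for the root port */
};

static struct toy_dev *find_root_port(struct toy_dev *dev)
{
        while (dev && dev->parent)
                dev = dev->parent;
        return dev;
}

int main(void)
{
        struct toy_dev root  = { "root port", NULL };
        struct toy_dev sw_dn = { "switch downstream port", &root };
        struct toy_dev ep    = { "endpoint", &sw_dn };

        printf("reset target for %s: %s\n", ep.name, find_root_port(&ep)->name);
        return 0;
}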
@@ -1,138 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * PCI Express Link Bandwidth Notification services driver
- * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
- *
- * Copyright (C) 2019, Dell Inc
- *
- * The PCIe Link Bandwidth Notification provides a way to notify the
- * operating system when the link width or data rate changes. This
- * capability is required for all root ports and downstream ports
- * supporting links wider than x1 and/or multiple link speeds.
- *
- * This service port driver hooks into the bandwidth notification interrupt
- * and warns when links become degraded in operation.
- */
-
-#define dev_fmt(fmt) "bw_notification: " fmt
-
-#include "../pci.h"
-#include "portdrv.h"
-
-static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
-{
-	int ret;
-	u32 lnk_cap;
-
-	ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap);
-	return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC);
-}
-
-static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev)
-{
-	u16 lnk_ctl;
-
-	pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
-
-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
-	lnk_ctl |= PCI_EXP_LNKCTL_LBMIE;
-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev)
-{
-	u16 lnk_ctl;
-
-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
-	lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE;
-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
-{
-	struct pcie_device *srv = context;
-	struct pci_dev *port = srv->port;
-	u16 link_status, events;
-	int ret;
-
-	ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
-	events = link_status & PCI_EXP_LNKSTA_LBMS;
-
-	if (ret != PCIBIOS_SUCCESSFUL || !events)
-		return IRQ_NONE;
-
-	pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
-	pcie_update_link_speed(port->subordinate, link_status);
-	return IRQ_WAKE_THREAD;
-}
-
-static irqreturn_t pcie_bw_notification_handler(int irq, void *context)
-{
-	struct pcie_device *srv = context;
-	struct pci_dev *port = srv->port;
-	struct pci_dev *dev;
-
-	/*
-	 * Print status from downstream devices, not this root port or
-	 * downstream switch port.
-	 */
-	down_read(&pci_bus_sem);
-	list_for_each_entry(dev, &port->subordinate->devices, bus_list)
-		pcie_report_downtraining(dev);
-	up_read(&pci_bus_sem);
-
-	return IRQ_HANDLED;
-}
-
-static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
-{
-	int ret;
-
-	/* Single-width or single-speed ports do not have to support this. */
-	if (!pcie_link_bandwidth_notification_supported(srv->port))
-		return -ENODEV;
-
-	ret = request_threaded_irq(srv->irq, pcie_bw_notification_irq,
-				   pcie_bw_notification_handler,
-				   IRQF_SHARED, "PCIe BW notif", srv);
-	if (ret)
-		return ret;
-
-	pcie_enable_link_bandwidth_notification(srv->port);
-	pci_info(srv->port, "enabled with IRQ %d\n", srv->irq);
-
-	return 0;
-}
-
-static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
-{
-	pcie_disable_link_bandwidth_notification(srv->port);
-	free_irq(srv->irq, srv);
-}
-
-static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
-{
-	pcie_disable_link_bandwidth_notification(srv->port);
-	return 0;
-}
-
-static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
-{
-	pcie_enable_link_bandwidth_notification(srv->port);
-	return 0;
-}
-
-static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
-	.name = "pcie_bw_notification",
-	.port_type = PCIE_ANY_PORT,
-	.service = PCIE_PORT_SERVICE_BWNOTIF,
-	.probe = pcie_bandwidth_notification_probe,
-	.suspend = pcie_bandwidth_notification_suspend,
-	.resume = pcie_bandwidth_notification_resume,
-	.remove = pcie_bandwidth_notification_remove,
-};
-
-int __init pcie_bandwidth_notification_init(void)
-{
-	return pcie_port_service_register(&pcie_bandwidth_notification_driver);
-}
@@ -198,8 +198,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	pci_dbg(bridge, "broadcast error_detected message\n");
 	if (state == pci_channel_io_frozen) {
 		pci_walk_bridge(bridge, report_frozen_detected, &status);
-		status = reset_subordinates(bridge);
-		if (status != PCI_ERS_RESULT_RECOVERED) {
+		if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
 			pci_warn(bridge, "subordinate device reset failed\n");
 			goto failed;
 		}
@@ -231,15 +230,14 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	pci_walk_bridge(bridge, report_resume, &status);
 
 	/*
-	 * If we have native control of AER, clear error status in the Root
-	 * Port or Downstream Port that signaled the error. If the
-	 * platform retained control of AER, it is responsible for clearing
-	 * this status. In that case, the signaling device may not even be
-	 * visible to the OS.
+	 * If we have native control of AER, clear error status in the device
+	 * that detected the error. If the platform retained control of AER,
+	 * it is responsible for clearing this status. In that case, the
+	 * signaling device may not even be visible to the OS.
	 */
 	if (host->native_aer || pcie_ports_native) {
-		pcie_clear_device_status(bridge);
-		pci_aer_clear_nonfatal_status(bridge);
+		pcie_clear_device_status(dev);
+		pci_aer_clear_nonfatal_status(dev);
 	}
 	pci_info(bridge, "device recovery successful\n");
 	return status;
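The first hunk above is the "retain error recovery status" fix: previously the value returned by reset_subordinates() overwrote the status that report_frozen_detected() had just collected from the drivers, so their answer was lost for the rest of recovery. Checking the reset result in its own expression keeps status intact; the second hunk then clears error status on the device that detected the error rather than on the bridge. A compact, self-contained illustration of why the reset result is checked separately (toy types, not kernel code):

/* Toy illustration of keeping an aggregated status instead of letting a
 * later call overwrite it. */
#include <stdio.h>

enum result { RECOVERED, NEED_RESET, FAILED };

static enum result drivers_vote(void) { return NEED_RESET; } /* from walking devices */
static enum result do_reset(void)     { return RECOVERED; }  /* reset succeeded */

int main(void)
{
        enum result status = drivers_vote();

        /* Old pattern: status = do_reset(); would discard the drivers' answer. */
        if (do_reset() != RECOVERED) {
                puts("subordinate device reset failed");
                return 1;
        }

        /* status still reflects what the drivers asked for. */
        printf("status after reset: %d\n", status);
        return 0;
}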
@@ -53,12 +53,6 @@ int pcie_dpc_init(void);
 static inline int pcie_dpc_init(void) { return 0; }
 #endif
 
-#ifdef CONFIG_PCIE_BW
-int pcie_bandwidth_notification_init(void);
-#else
-static inline int pcie_bandwidth_notification_init(void) { return 0; }
-#endif
-
 /* Port Type */
 #define PCIE_ANY_PORT (~0)
 
@@ -153,7 +153,8 @@ static void pcie_portdrv_remove(struct pci_dev *dev)
 static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
 					pci_channel_state_t error)
 {
-	/* Root Port has no impact. Always recovers. */
+	if (error == pci_channel_io_frozen)
+		return PCI_ERS_RESULT_NEED_RESET;
 	return PCI_ERS_RESULT_CAN_RECOVER;
 }
 
@@ -255,7 +256,6 @@ static void __init pcie_init_services(void)
 	pcie_pme_init();
 	pcie_dpc_init();
 	pcie_hp_init();
-	pcie_bandwidth_notification_init();
 }
 
 static int __init pcie_portdrv_init(void)
@@ -168,7 +168,6 @@ struct pci_bus *pci_find_next_bus(const struct pci_bus *from)
 	struct list_head *n;
 	struct pci_bus *b = NULL;
 
-	WARN_ON(in_interrupt());
 	down_read(&pci_bus_sem);
 	n = from ? from->node.next : pci_root_buses.next;
 	if (n != &pci_root_buses)
@@ -196,7 +195,6 @@ struct pci_dev *pci_get_slot(struct pci_bus *bus, unsigned int devfn)
 {
 	struct pci_dev *dev;
 
-	WARN_ON(in_interrupt());
 	down_read(&pci_bus_sem);
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
@@ -274,7 +272,6 @@ static struct pci_dev *pci_get_dev_by_id(const struct pci_device_id *id,
 	struct device *dev_start = NULL;
 	struct pci_dev *pdev = NULL;
 
-	WARN_ON(in_interrupt());
 	if (from)
 		dev_start = &from->dev;
 	dev = bus_find_device(&pci_bus_type, dev_start, (void *)id,
@@ -381,7 +378,6 @@ int pci_dev_present(const struct pci_device_id *ids)
 {
 	struct pci_dev *found = NULL;
 
-	WARN_ON(in_interrupt());
 	while (ids->vendor || ids->subvendor || ids->class_mask) {
 		found = pci_get_dev_by_id(ids, NULL);
 		if (found) {
|
|||||||
int pci_resize_resource(struct pci_dev *dev, int resno, int size)
|
int pci_resize_resource(struct pci_dev *dev, int resno, int size)
|
||||||
{
|
{
|
||||||
struct resource *res = dev->resource + resno;
|
struct resource *res = dev->resource + resno;
|
||||||
|
struct pci_host_bridge *host;
|
||||||
int old, ret;
|
int old, ret;
|
||||||
u32 sizes;
|
u32 sizes;
|
||||||
u16 cmd;
|
u16 cmd;
|
||||||
|
|
||||||
|
/* Check if we must preserve the firmware's resource assignment */
|
||||||
|
host = pci_find_host_bridge(dev->bus);
|
||||||
|
if (host->preserve_config)
|
||||||
|
return -ENOTSUPP;
|
||||||
|
|
||||||
/* Make sure the resource isn't assigned before resizing it. */
|
/* Make sure the resource isn't assigned before resizing it. */
|
||||||
if (!(res->flags & IORESOURCE_UNSET))
|
if (!(res->flags & IORESOURCE_UNSET))
|
||||||
return -EBUSY;
|
return -EBUSY;
|
||||||
|
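The pci_resize_resource() hunk above declines to resize anything when the host bridge was enumerated with preserve_config set, since the firmware's boot-time assignment must not be disturbed. A minimal sketch of the same early-return guard with toy types; the kernel code returns -ENOTSUPP, while the userspace sketch uses EOPNOTSUPP because ENOTSUPP is kernel-internal:

/* Toy sketch of an early guard: refuse an operation when the platform
 * asked us to preserve the boot-time configuration. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct host_bridge { bool preserve_config; };

static int resize_resource(const struct host_bridge *host, int new_size)
{
        /* Check if we must preserve the firmware's resource assignment */
        if (host->preserve_config)
                return -EOPNOTSUPP;

        printf("resized to %d\n", new_size);
        return 0;
}

int main(void)
{
        struct host_bridge fixed    = { .preserve_config = true };
        struct host_bridge flexible = { .preserve_config = false };

        printf("fixed:    %d\n", resize_resource(&fixed, 8));
        printf("flexible: %d\n", resize_resource(&flexible, 8));
        return 0;
}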
@@ -20,7 +20,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
 	u16 word;
 	u32 dword;
 	long err;
-	long cfg_ret;
+	int cfg_ret;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -46,7 +46,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
 	}
 
 	err = -EIO;
-	if (cfg_ret != PCIBIOS_SUCCESSFUL)
+	if (cfg_ret)
 		goto error;
 
 	switch (len) {
@@ -105,7 +105,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 		if (err)
 			break;
 		err = pci_user_write_config_byte(dev, off, byte);
-		if (err != PCIBIOS_SUCCESSFUL)
+		if (err)
 			err = -EIO;
 		break;
 
@@ -114,7 +114,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 		if (err)
 			break;
 		err = pci_user_write_config_word(dev, off, word);
-		if (err != PCIBIOS_SUCCESSFUL)
+		if (err)
 			err = -EIO;
 		break;
 
@@ -123,7 +123,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 		if (err)
 			break;
 		err = pci_user_write_config_dword(dev, off, dword);
-		if (err != PCIBIOS_SUCCESSFUL)
+		if (err)
 			err = -EIO;
 		break;
 
@@ -591,9 +591,6 @@ extern u32 osc_sb_native_usb4_control;
 #define ACPI_GSB_ACCESS_ATTRIB_RAW_BYTES 0x0000000E
 #define ACPI_GSB_ACCESS_ATTRIB_RAW_PROCESS 0x0000000F
 
-extern acpi_status acpi_pci_osc_control_set(acpi_handle handle,
-					     u32 *mask, u32 req);
-
 /* Enable _OST when all relevant hotplug operations are enabled */
 #if defined(CONFIG_ACPI_HOTPLUG_CPU) && \
 	defined(CONFIG_ACPI_HOTPLUG_MEMORY) && \
@@ -13,6 +13,12 @@
 
 struct pci_epc;
 
+enum pci_epc_interface_type {
+	UNKNOWN_INTERFACE = -1,
+	PRIMARY_INTERFACE,
+	SECONDARY_INTERFACE,
+};
+
 enum pci_epc_irq_type {
 	PCI_EPC_IRQ_UNKNOWN,
 	PCI_EPC_IRQ_LEGACY,
@@ -20,6 +26,19 @@ enum pci_epc_irq_type {
 	PCI_EPC_IRQ_MSIX,
 };
 
+static inline const char *
+pci_epc_interface_string(enum pci_epc_interface_type type)
+{
+	switch (type) {
+	case PRIMARY_INTERFACE:
+		return "primary";
+	case SECONDARY_INTERFACE:
+		return "secondary";
+	default:
+		return "UNKNOWN interface";
+	}
+}
+
 /**
  * struct pci_epc_ops - set of function pointers for performing EPC operations
  * @write_header: ops to populate configuration space header
@@ -36,6 +55,7 @@ enum pci_epc_irq_type {
  * @get_msix: ops to get the number of MSI-X interrupts allocated by the RC
  *	      from the MSI-X capability register
 * @raise_irq: ops to raise a legacy, MSI or MSI-X interrupt
+ * @map_msi_irq: ops to map physical address to MSI address and return MSI data
 * @start: ops to start the PCI link
 * @stop: ops to stop the PCI link
 * @owner: the module owner containing the ops
@@ -58,6 +78,10 @@ struct pci_epc_ops {
 	int (*get_msix)(struct pci_epc *epc, u8 func_no);
 	int (*raise_irq)(struct pci_epc *epc, u8 func_no,
 			 enum pci_epc_irq_type type, u16 interrupt_num);
+	int (*map_msi_irq)(struct pci_epc *epc, u8 func_no,
+			   phys_addr_t phys_addr, u8 interrupt_num,
+			   u32 entry_size, u32 *msi_data,
+			   u32 *msi_addr_offset);
 	int (*start)(struct pci_epc *epc);
 	void (*stop)(struct pci_epc *epc);
 	const struct pci_epc_features* (*get_features)(struct pci_epc *epc,
@@ -175,10 +199,12 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
 		 struct module *owner);
 void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc);
 void pci_epc_destroy(struct pci_epc *epc);
-int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf);
+int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf,
+		    enum pci_epc_interface_type type);
 void pci_epc_linkup(struct pci_epc *epc);
 void pci_epc_init_notify(struct pci_epc *epc);
-void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf);
+void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
+			enum pci_epc_interface_type type);
 int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
 			 struct pci_epf_header *hdr);
 int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
@@ -195,14 +221,19 @@ int pci_epc_get_msi(struct pci_epc *epc, u8 func_no);
 int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
 		     enum pci_barno, u32 offset);
 int pci_epc_get_msix(struct pci_epc *epc, u8 func_no);
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no,
+			phys_addr_t phys_addr, u8 interrupt_num,
+			u32 entry_size, u32 *msi_data, u32 *msi_addr_offset);
 int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 		      enum pci_epc_irq_type type, u16 interrupt_num);
 int pci_epc_start(struct pci_epc *epc);
 void pci_epc_stop(struct pci_epc *epc);
 const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
 						    u8 func_no);
-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
-					*epc_features);
+enum pci_barno
+pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features);
+enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+					 *epc_features, enum pci_barno bar);
 struct pci_epc *pci_epc_get(const char *epc_name);
 void pci_epc_put(struct pci_epc *epc);
 
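The pci-epc.h hunks above change pci_epc_get_first_free_bar() to return enum pci_barno (so a "no free BAR" result can be expressed as NO_BAR, added in pci-epf.h below) and add pci_epc_get_next_free_bar(), letting endpoint function drivers that need several BARs iterate past reserved ones. A self-contained mock of the first/next iteration pattern; the features layout and helpers below are simplified stand-ins, not the kernel API:

/* Toy mock of the "first free BAR / next free BAR" iteration pattern. */
#include <stdio.h>

enum barno { NO_BAR = -1, BAR_0, BAR_1, BAR_2, BAR_3, BAR_4, BAR_5 };

struct features {
        unsigned int reserved_bar;       /* bitmap of BARs the controller keeps */
};

static enum barno next_free_bar(const struct features *f, enum barno bar)
{
        for (; bar <= BAR_5; bar++)
                if (!(f->reserved_bar & (1u << bar)))
                        return bar;
        return NO_BAR;
}

static enum barno first_free_bar(const struct features *f)
{
        return next_free_bar(f, BAR_0);
}

int main(void)
{
        struct features f = { .reserved_bar = (1 << BAR_0) | (1 << BAR_3) };

        for (enum barno b = first_free_bar(&f); b != NO_BAR;
             b = next_free_bar(&f, b + 1))
                printf("free: BAR_%d\n", b);
        return 0;
}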
@@ -9,11 +9,13 @@
 #ifndef __LINUX_PCI_EPF_H
 #define __LINUX_PCI_EPF_H
 
+#include <linux/configfs.h>
 #include <linux/device.h>
 #include <linux/mod_devicetable.h>
 #include <linux/pci.h>
 
 struct pci_epf;
+enum pci_epc_interface_type;
 
 enum pci_notify_event {
 	CORE_INIT,
@@ -21,6 +23,7 @@ enum pci_notify_event {
 };
 
 enum pci_barno {
+	NO_BAR = -1,
 	BAR_0,
 	BAR_1,
 	BAR_2,
@@ -60,10 +63,13 @@ struct pci_epf_header {
 * @bind: ops to perform when a EPC device has been bound to EPF device
 * @unbind: ops to perform when a binding has been lost between a EPC device
 *	    and EPF device
+ * @add_cfs: ops to initialize function specific configfs attributes
 */
 struct pci_epf_ops {
 	int (*bind)(struct pci_epf *epf);
 	void (*unbind)(struct pci_epf *epf);
+	struct config_group *(*add_cfs)(struct pci_epf *epf,
+					struct config_group *group);
 };
 
 /**
@@ -118,6 +124,12 @@ struct pci_epf_bar {
 * @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
 * @nb: notifier block to notify EPF of any EPC events (like linkup)
 * @lock: mutex to protect pci_epf_ops
+ * @sec_epc: the secondary EPC device to which this EPF device is bound
+ * @sec_epc_list: to add pci_epf as list of PCI endpoint functions to secondary
+ *		  EPC device
+ * @sec_epc_bar: represents the BAR of EPF device associated with secondary EPC
+ * @sec_epc_func_no: unique (physical) function number within the secondary EPC
+ * @group: configfs group associated with the EPF device
 */
 struct pci_epf {
 	struct device dev;
@@ -134,6 +146,13 @@ struct pci_epf {
 	struct notifier_block nb;
 	/* mutex to protect against concurrent access of pci_epf_ops */
 	struct mutex lock;
+
+	/* Below members are to attach secondary EPC to an endpoint function */
+	struct pci_epc *sec_epc;
+	struct list_head sec_epc_list;
+	struct pci_epf_bar sec_epc_bar[6];
+	u8 sec_epc_func_no;
+	struct config_group *group;
 };
 
 /**
@@ -164,16 +183,17 @@ static inline void *epf_get_drvdata(struct pci_epf *epf)
 	return dev_get_drvdata(&epf->dev);
 }
 
-const struct pci_epf_device_id *
-pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf);
 struct pci_epf *pci_epf_create(const char *name);
 void pci_epf_destroy(struct pci_epf *epf);
 int __pci_epf_register_driver(struct pci_epf_driver *driver,
 			      struct module *owner);
 void pci_epf_unregister_driver(struct pci_epf_driver *driver);
 void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
-			  size_t align);
-void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar);
+			  size_t align, enum pci_epc_interface_type type);
+void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
+			enum pci_epc_interface_type type);
 int pci_epf_bind(struct pci_epf *epf);
 void pci_epf_unbind(struct pci_epf *epf);
+struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+					  struct config_group *group);
 #endif /* __LINUX_PCI_EPF_H */
@@ -882,6 +882,7 @@
 #define PCI_DEVICE_ID_TI_X620 0xac8d
 #define PCI_DEVICE_ID_TI_X420 0xac8e
 #define PCI_DEVICE_ID_TI_XX20_FM 0xac8f
+#define PCI_DEVICE_ID_TI_J721E 0xb00d
 #define PCI_DEVICE_ID_TI_DRA74x 0xb500
 #define PCI_DEVICE_ID_TI_DRA72x 0xb501
 
@@ -2589,6 +2590,8 @@
 
 #define PCI_VENDOR_ID_REDHAT 0x1b36
 
+#define PCI_VENDOR_ID_SILICOM_DENMARK 0x1c2c
+
 #define PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS 0x1c36
 
 #define PCI_VENDOR_ID_CIRCUITCO 0x1cc8
@@ -28,6 +28,8 @@ static DEFINE_MUTEX(io_range_mutex);
 * @new_range: pointer to the IO range to be registered.
 *
 * Returns 0 on success, the error code in case of failure.
+ * If the range already exists, -EEXIST will be returned, which should be
+ * considered a success.
 *
 * Register a new IO range node in the IO range list.
 */
@@ -51,6 +53,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range)
 	list_for_each_entry(range, &io_range_list, list) {
 		if (range->fwnode == new_range->fwnode) {
 			/* range already there */
+			ret = -EEXIST;
 			goto end_register;
 		}
 		if (range->flags == LOGIC_PIO_CPU_MMIO &&