pci-v6.7-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmVBaU8UHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwEdxAAo++s98+ZaaTdUuoV0Zpft1fuY6Yr
 mR80jUDxjHDbcI1G4iNVUSWG6pGIdlURnrBp5kU74FV9R2Ps3Fl49XQUHowE0HfH
 D/qmihiJQdnMsQKwzw3XGoTSINrDcF6nLafl9brBItVkgjNxfxSEbnweJMBf+Boc
 rpRXHzxbVHVjwwhBLODF2Wt/8sQ24w9c+wcQkpo7im8ZZReoigNMKgEa4J7tLlqA
 vTyPR/K6QeU8IBUk2ObCY3GeYrVuqi82eRK3Uwzu7IkQwA9orE416Okvq3Z026/h
 TUAivtrcygHaFRdGNvzspYLbc2hd2sEXF+KKKb6GNAjxuDWUhVQW4ObY4FgFkZ65
 Gqz/05D6c1dqTS3vTxp3nZYpvPEbNnO1RaGRL4h0/mbU+QSPSlHXWd9Lfg6noVVd
 3O+CcstQK8RzMiiWLeyctRPV5XIf7nGVQTJW5aCLajlHeJWcvygNpNG4N57j/hXQ
 gyEHrz3idXXHXkBKmyWZfre6YpLkxZtKyONZDHWI/AVhU0TgRdJWmqpRfC1kVVUe
 IUWBRcPUF4/r3jEu6t10N/aDWQN1uQzIsJNnCrKzAddPDTTYQJk8VVzKPo8SVxPD
 X+OjEMgBB/fXUfkJ7IMwgYnWaFJhxthrs6/3j1UqRvGYRoulE4NdWwJDky9UYIHd
 qV3dzuAxC/cpv08=
 =G//C
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull pci updates from Bjorn Helgaas:
 "Enumeration:

   - Use acpi_evaluate_dsm_typed() instead of open-coding _DSM
     evaluation to learn device characteristics (Andy Shevchenko)

   - Tidy multi-function header checks using new PCI_HEADER_TYPE_MASK
     definition (Ilpo Järvinen)

   - Simplify config access error checking in various drivers (Ilpo
     Järvinen)

   - Use pcie_capability_clear_word() (not
     pcie_capability_clear_and_set_word()) when only clearing (Ilpo
     Järvinen)

   - Add pci_get_base_class() to simplify finding devices using base
     class only (ignoring subclass and programming interface) (Sui
     Jingfeng)

   - Add pci_is_vga(), which includes ancient PCI_CLASS_NOT_DEFINED_VGA
     devices from before the Class Code was added to PCI (Sui Jingfeng)

   - Use pci_is_vga() for vgaarb, sysfs "boot_vga", virtio, qxl to
     include ancient VGA devices (Sui Jingfeng); a usage sketch of both
     new helpers follows this list
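
   [Sketch: how the two new helpers combine. The loop mirrors the
   amdgpu/radeon/nouveau conversions shown later in this commit;
   handle_vga_device() is a hypothetical caller, not from the series.]

	struct pci_dev *pdev = NULL;

	/* Walk all display-class devices, regardless of subclass */
	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
		/* pci_is_vga() also matches pre-Class Code VGA devices */
		if (pci_is_vga(pdev))
			handle_vga_device(pdev);	/* hypothetical */
	}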

  Resource management:

   - Make pci_assign_unassigned_resources() non-init because sparc uses
     it after init (Randy Dunlap)

  Driver binding:

   - Retain .remove() and .probe() callbacks (previously __init) because
     sysfs may cause them to be called later (Uwe Kleine-König)

   - Prevent xHCI driver from claiming AMD VanGogh USB3 DRD device, so
     it can be claimed by dwc3 instead (Vicki Pfau)

  PCI device hotplug:

   - Add Ampere Altra Attention Indicator extension driver for acpiphp
     (D Scott Phillips)

  Power management:

   - Quirk VideoPropulsion Torrent QN16e with longer delay after reset
     (Lukas Wunner)

   - Prevent users from overriding drivers that say we shouldn't use
     D3cold (Lukas Wunner)

   - Avoid PME from D3hot/D3cold for AMD Rembrandt and Phoenix USB4
     because wakeup interrupts from those states don't work if amd-pmc
     has put the platform in a hardware sleep state (Mario Limonciello)

  IOMMU:

   - Disable ATS for Intel IPU E2000 devices with invalidation message
     endianness erratum (Bartosz Pawlowski)

  Error handling:

   - Factor out interrupt enable/disable into helpers (Kai-Heng Feng)

  Peer-to-peer DMA:

   - Fix flexible-array usage in struct pci_p2pdma_pagemap in case we
     ever use pagemaps with multiple entries (Gustavo A. R. Silva)

  ASPM:

   - Revert a change that broke the case where drivers disabled L1 and
     users later enabled an L1.x substate via sysfs, and fix a similar
     issue when users disabled L1 via sysfs (Heiner Kallweit)

  Endpoint framework:

   - Fix double free in __pci_epc_create() (Dan Carpenter)

   - Use IS_ERR_OR_NULL() to simplify endpoint core (Ruan Jinjie), as
     sketched just below
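
   [Sketch: the stock kernel pattern behind this cleanup; a generic
   example with a made-up variable, not the exact hunk.]

	/* Before: two separate checks */
	if (!epc || IS_ERR(epc))
		return -EINVAL;

	/* After: one helper from <linux/err.h> */
	if (IS_ERR_OR_NULL(epc))
		return -EINVAL;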

  Cadence PCIe controller driver:

   - Drop unused "is_rc" member (Li Chen)

  Freescale Layerscape PCIe controller driver:

   - Enable 64-bit addressing in endpoint mode (Guanhua Gao)

  Intel VMD host bridge driver:

   - Fix multi-function header check (Ilpo Järvinen)

  Microsoft Hyper-V host bridge driver:

   - Annotate struct hv_dr_state with __counted_by (Kees Cook); see the
     sketch just below
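
   [Sketch: __counted_by ties a flexible array to the field holding its
   element count, so FORTIFY_SOURCE/UBSAN bounds checking can police
   accesses. A generic example, not the actual hv_dr_state layout.]

	struct example_state {		/* hypothetical struct */
		u32 device_count;
		struct example_dev devices[] __counted_by(device_count);
	};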

  NVIDIA Tegra194 PCIe controller driver:

   - Drop setting of LNKCAP_MLW (max link width) since dw_pcie_setup()
     already does this via dw_pcie_link_set_max_link_width() (Yoshihiro
     Shimoda)

  Qualcomm PCIe controller driver:

   - Use PCIE_SPEED2MBS_ENC() to simplify encoding of link speed
     (Manivannan Sadhasivam)

   - Add a .write_dbi2() callback so DBI2 register writes, e.g., for
     setting the BAR size, work correctly (Manivannan Sadhasivam)

   - Enable ASPM for platforms that use 1.9.0 ops, because the PCI core
     doesn't enable ASPM states that haven't been enabled by the
     firmware (Manivannan Sadhasivam)

  Renesas R-Car Gen4 PCIe controller driver:

   - Add DesignWare core support (set max link width, EDMA_UNROLL flag,
     .pre_init(), .deinit(), etc) for use by R-Car Gen4 driver
     (Yoshihiro Shimoda)

   - Add driver and DT schema for DesignWare-based Renesas R-Car Gen4
     controller in both host and endpoint mode (Yoshihiro Shimoda)

  Xilinx NWL PCIe controller driver:

   - Update ECAM size to support 256 buses (Thippeswamy Havalige)

   - Stop setting bridge primary/secondary/subordinate bus numbers,
     since PCI core does this (Thippeswamy Havalige)

  Xilinx XDMA controller driver:

   - Add driver and DT schema for Zynq UltraScale+ MPSoCs devices with
     Xilinx XDMA Soft IP (Thippeswamy Havalige)

  Miscellaneous:

   - Use FIELD_GET()/FIELD_PREP() to simplify and reduce use of _SHIFT
     macros (Ilpo Järvinen, Bjorn Helgaas); a before/after sketch
     follows this quoted message

   - Remove logic_outb(), _outw(), _outl() duplicate declarations (John
     Sanpe)

   - Replace unnecessary UTF-8 in Kconfig help text because menuconfig
     doesn't render it correctly (Liu Song)"
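
The FIELD_GET()/FIELD_PREP() conversions mentioned under "Miscellaneous"
all follow one pattern; here it is for the MSI QSIZE field (taken from
the pcie-designware-ep.c hunk further below; PCI_MSI_FLAGS_QSIZE is the
0x70 mask, i.e. an implicit shift of 4):

	#include <linux/bitfield.h>

	/* Before: open-coded mask-and-shift */
	val = (val & PCI_MSI_FLAGS_QSIZE) >> 4;

	/* After: the shift is derived from the mask at compile time */
	val = FIELD_GET(PCI_MSI_FLAGS_QSIZE, val);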

* tag 'pci-v6.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (102 commits)
  PCI: qcom-ep: Add dedicated callback for writing to DBI2 registers
  PCI: Simplify pcie_capability_clear_and_set_word() to ..._clear_word()
  PCI: endpoint: Fix double free in __pci_epc_create()
  PCI: xilinx-xdma: Add Xilinx XDMA Root Port driver
  dt-bindings: PCI: xilinx-xdma: Add schemas for Xilinx XDMA PCIe Root Port Bridge
  PCI: xilinx-cpm: Move IRQ definitions to a common header
  PCI: xilinx-nwl: Modify ECAM size to enable support for 256 buses
  PCI: xilinx-nwl: Rename the NWL_ECAM_VALUE_DEFAULT macro
  dt-bindings: PCI: xilinx-nwl: Modify ECAM size in the DT example
  PCI: xilinx-nwl: Remove redundant code that sets Type 1 header fields
  PCI: hotplug: Add Ampere Altra Attention Indicator extension driver
  PCI/AER: Factor out interrupt toggling into helpers
  PCI: acpiphp: Allow built-in drivers for Attention Indicators
  PCI/portdrv: Use FIELD_GET()
  PCI/VC: Use FIELD_GET()
  PCI/PTM: Use FIELD_GET()
  PCI/PME: Use FIELD_GET()
  PCI/ATS: Use FIELD_GET()
  PCI/ATS: Show PASID Capability register width in bitmasks
  PCI/ASPM: Fix L1 substate handling in aspm_attr_store_common()
  ...
Merged by Linus Torvalds on 2023-11-02 14:05:18 -10:00
commit 27beb3ca34

88 changed files with 2613 additions and 511 deletions

@@ -0,0 +1,115 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2022-2023 Renesas Electronics Corp.
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/rcar-gen4-pci-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Renesas R-Car Gen4 PCIe Endpoint

maintainers:
  - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>

allOf:
  - $ref: snps,dw-pcie-ep.yaml#

properties:
  compatible:
    items:
      - const: renesas,r8a779f0-pcie-ep     # R-Car S4-8
      - const: renesas,rcar-gen4-pcie-ep    # R-Car Gen4

  reg:
    maxItems: 7

  reg-names:
    items:
      - const: dbi
      - const: dbi2
      - const: atu
      - const: dma
      - const: app
      - const: phy
      - const: addr_space

  interrupts:
    maxItems: 3

  interrupt-names:
    items:
      - const: dma
      - const: sft_ce
      - const: app

  clocks:
    maxItems: 2

  clock-names:
    items:
      - const: core
      - const: ref

  power-domains:
    maxItems: 1

  resets:
    maxItems: 1

  reset-names:
    items:
      - const: pwr

  max-link-speed:
    maximum: 4

  num-lanes:
    maximum: 4

  max-functions:
    maximum: 2

required:
  - compatible
  - reg
  - reg-names
  - interrupts
  - interrupt-names
  - clocks
  - clock-names
  - power-domains
  - resets
  - reset-names

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/r8a779f0-cpg-mssr.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/r8a779f0-sysc.h>

    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie0_ep: pcie-ep@e65d0000 {
            compatible = "renesas,r8a779f0-pcie-ep", "renesas,rcar-gen4-pcie-ep";
            reg = <0 0xe65d0000 0 0x2000>, <0 0xe65d2000 0 0x1000>,
                  <0 0xe65d3000 0 0x2000>, <0 0xe65d5000 0 0x1200>,
                  <0 0xe65d6200 0 0x0e00>, <0 0xe65d7000 0 0x0400>,
                  <0 0xfe000000 0 0x400000>;
            reg-names = "dbi", "dbi2", "atu", "dma", "app", "phy", "addr_space";
            interrupts = <GIC_SPI 417 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 422 IRQ_TYPE_LEVEL_HIGH>;
            interrupt-names = "dma", "sft_ce", "app";
            clocks = <&cpg CPG_MOD 624>, <&pcie0_clkref>;
            clock-names = "core", "ref";
            power-domains = <&sysc R8A779F0_PD_ALWAYS_ON>;
            resets = <&cpg 624>;
            reset-names = "pwr";
            max-link-speed = <4>;
            num-lanes = <2>;
            max-functions = /bits/ 8 <2>;
        };
    };

@@ -0,0 +1,127 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2022-2023 Renesas Electronics Corp.
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/rcar-gen4-pci-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Renesas R-Car Gen4 PCIe Host

maintainers:
  - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>

allOf:
  - $ref: snps,dw-pcie.yaml#

properties:
  compatible:
    items:
      - const: renesas,r8a779f0-pcie      # R-Car S4-8
      - const: renesas,rcar-gen4-pcie     # R-Car Gen4

  reg:
    maxItems: 7

  reg-names:
    items:
      - const: dbi
      - const: dbi2
      - const: atu
      - const: dma
      - const: app
      - const: phy
      - const: config

  interrupts:
    maxItems: 4

  interrupt-names:
    items:
      - const: msi
      - const: dma
      - const: sft_ce
      - const: app

  clocks:
    maxItems: 2

  clock-names:
    items:
      - const: core
      - const: ref

  power-domains:
    maxItems: 1

  resets:
    maxItems: 1

  reset-names:
    items:
      - const: pwr

  max-link-speed:
    maximum: 4

  num-lanes:
    maximum: 4

required:
  - compatible
  - reg
  - reg-names
  - interrupts
  - interrupt-names
  - clocks
  - clock-names
  - power-domains
  - resets
  - reset-names

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/r8a779f0-cpg-mssr.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/r8a779f0-sysc.h>

    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie: pcie@e65d0000 {
            compatible = "renesas,r8a779f0-pcie", "renesas,rcar-gen4-pcie";
            reg = <0 0xe65d0000 0 0x1000>, <0 0xe65d2000 0 0x0800>,
                  <0 0xe65d3000 0 0x2000>, <0 0xe65d5000 0 0x1200>,
                  <0 0xe65d6200 0 0x0e00>, <0 0xe65d7000 0 0x0400>,
                  <0 0xfe000000 0 0x400000>;
            reg-names = "dbi", "dbi2", "atu", "dma", "app", "phy", "config";
            interrupts = <GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 417 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 422 IRQ_TYPE_LEVEL_HIGH>;
            interrupt-names = "msi", "dma", "sft_ce", "app";
            clocks = <&cpg CPG_MOD 624>, <&pcie0_clkref>;
            clock-names = "core", "ref";
            power-domains = <&sysc R8A779F0_PD_ALWAYS_ON>;
            resets = <&cpg 624>;
            reset-names = "pwr";
            max-link-speed = <4>;
            num-lanes = <2>;
            #address-cells = <3>;
            #size-cells = <2>;
            bus-range = <0x00 0xff>;
            device_type = "pci";
            ranges = <0x01000000 0 0x00000000 0 0xfe000000 0 0x00400000>,
                     <0x02000000 0 0x30000000 0 0x30000000 0 0x10000000>;
            dma-ranges = <0x42000000 0 0x00000000 0 0x00000000 1 0x00000000>;
            #interrupt-cells = <1>;
            interrupt-map-mask = <0 0 0 7>;
            interrupt-map = <0 0 0 1 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>,
                            <0 0 0 2 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>,
                            <0 0 0 3 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>,
                            <0 0 0 4 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>;
            snps,enable-cdm-check;
        };
    };

@@ -33,11 +33,11 @@ properties:
       specific for each activated function, while the rest of the sub-spaces
       are common for all of them (if there are more than one).
     minItems: 2
-    maxItems: 6
+    maxItems: 7

  reg-names:
    minItems: 2
-    maxItems: 6
+    maxItems: 7

  interrupts:
    description:

@@ -33,11 +33,11 @@ properties:
       normal controller functioning. iATU memory IO region is also required
       if the space is unrolled (IP-core version >= 4.80a).
     minItems: 2
-    maxItems: 5
+    maxItems: 7

  reg-names:
    minItems: 2
-    maxItems: 5
+    maxItems: 7
    items:
      oneOf:
        - description:

@@ -42,11 +42,11 @@ properties:
       are required for the normal controller work. iATU memory IO region is
       also required if the space is unrolled (IP-core version >= 4.80a).
     minItems: 2
-    maxItems: 5
+    maxItems: 7

  reg-names:
    minItems: 2
-    maxItems: 5
+    maxItems: 7
    items:
      oneOf:
        - description:

@@ -118,7 +118,7 @@ examples:
         compatible = "xlnx,nwl-pcie-2.11";
         reg = <0x0 0xfd0e0000 0x0 0x1000>,
               <0x0 0xfd480000 0x0 0x1000>,
-              <0x80 0x00000000 0x0 0x1000000>;
+              <0x80 0x00000000 0x0 0x10000000>;
         reg-names = "breg", "pcireg", "cfg";
         ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>,
                  <0x43000000 0x00000006 0x0 0x00000006 0x0 0x00000002 0x0>;

@@ -0,0 +1,114 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/xlnx,xdma-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Xilinx XDMA PL PCIe Root Port Bridge

maintainers:
  - Thippeswamy Havalige <thippeswamy.havalige@amd.com>

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#

properties:
  compatible:
    const: xlnx,xdma-host-3.00

  reg:
    maxItems: 1

  ranges:
    maxItems: 2

  interrupts:
    items:
      - description: interrupt asserted when miscellaneous interrupt is received.
      - description: msi0 interrupt asserted when an MSI is received.
      - description: msi1 interrupt asserted when an MSI is received.

  interrupt-names:
    items:
      - const: misc
      - const: msi0
      - const: msi1

  interrupt-map-mask:
    items:
      - const: 0
      - const: 0
      - const: 0
      - const: 7

  interrupt-map:
    maxItems: 4

  "#interrupt-cells":
    const: 1

  interrupt-controller:
    description: identifies the node as an interrupt controller
    type: object
    properties:
      interrupt-controller: true

      "#address-cells":
        const: 0

      "#interrupt-cells":
        const: 1

    required:
      - interrupt-controller
      - "#address-cells"
      - "#interrupt-cells"

    additionalProperties: false

required:
  - compatible
  - reg
  - ranges
  - interrupts
  - interrupt-map
  - interrupt-map-mask
  - "#interrupt-cells"
  - interrupt-controller

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/interrupt-controller/irq.h>

    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie@a0000000 {
            compatible = "xlnx,xdma-host-3.00";
            reg = <0x0 0xa0000000 0x0 0x10000000>;
            ranges = <0x2000000 0x0 0xb0000000 0x0 0xb0000000 0x0 0x1000000>,
                     <0x43000000 0x5 0x0 0x5 0x0 0x0 0x1000000>;
            #address-cells = <3>;
            #size-cells = <2>;
            #interrupt-cells = <1>;
            device_type = "pci";
            interrupt-parent = <&gic>;
            interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 91 IRQ_TYPE_LEVEL_HIGH>;
            interrupt-names = "misc", "msi0", "msi1";
            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
            interrupt-map = <0 0 0 1 &pcie_intc_0 0>,
                            <0 0 0 2 &pcie_intc_0 1>,
                            <0 0 0 3 &pcie_intc_0 2>,
                            <0 0 0 4 &pcie_intc_0 3>;
            pcie_intc_0: interrupt-controller {
                #address-cells = <0>;
                #interrupt-cells = <1>;
                interrupt-controller;
            };
        };
    };

@@ -16526,6 +16526,7 @@ L:	linux-renesas-soc@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/pci/*rcar*
 F:	drivers/pci/controller/*rcar*
+F:	drivers/pci/controller/dwc/*rcar*

 PCI DRIVER FOR SAMSUNG EXYNOS
 M:	Jingoo Han <jingoohan1@gmail.com>

@@ -183,16 +183,17 @@ miata_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	   the 2nd 8259 controller. So we have to check for it first. */

 	if((slot == 7) && (PCI_FUNC(dev->devfn) == 3)) {
-		u8 irq=0;
 		struct pci_dev *pdev = pci_get_slot(dev->bus, dev->devfn & ~7);
-		if(pdev == NULL || pci_read_config_byte(pdev, 0x40,&irq) != PCIBIOS_SUCCESSFUL) {
-			pci_dev_put(pdev);
+		u8 irq = 0;
+		int ret;
+
+		if (!pdev)
 			return -1;
-		}
-		else {
-			pci_dev_put(pdev);
-			return irq;
-		}
+
+		ret = pci_read_config_byte(pdev, 0x40, &irq);
+		pci_dev_put(pdev);
+
+		return ret == PCIBIOS_SUCCESSFUL ? irq : -1;
 	}

 	return COMMON_TABLE_LOOKUP;

@@ -50,20 +50,21 @@ int __init pci_is_66mhz_capable(struct pci_channel *hose,
 				int top_bus, int current_bus)
 {
 	u32 pci_devfn;
-	unsigned short vid;
+	u16 vid;
 	int cap66 = -1;
 	u16 stat;
+	int ret;

 	pr_info("PCI: Checking 66MHz capabilities...\n");

 	for (pci_devfn = 0; pci_devfn < 0xff; pci_devfn++) {
 		if (PCI_FUNC(pci_devfn))
 			continue;
-		if (early_read_config_word(hose, top_bus, current_bus,
-					   pci_devfn, PCI_VENDOR_ID, &vid) !=
-		    PCIBIOS_SUCCESSFUL)
+		ret = early_read_config_word(hose, top_bus, current_bus,
+					     pci_devfn, PCI_VENDOR_ID, &vid);
+		if (ret != PCIBIOS_SUCCESSFUL)
 			continue;
-		if (vid == 0xffff)
+		if (PCI_POSSIBLE_ERROR(vid))
 			continue;

 		/* check 66MHz capability */

@@ -3,9 +3,11 @@
  * Exceptions for specific devices. Usually work-arounds for fatal design flaws.
  */

+#include <linux/bitfield.h>
 #include <linux/delay.h>
 #include <linux/dmi.h>
 #include <linux/pci.h>
+#include <linux/suspend.h>
 #include <linux/vgaarb.h>
 #include <asm/amd_nb.h>
 #include <asm/hpet.h>
@@ -904,3 +906,60 @@ static void chromeos_fixup_apl_pci_l1ss_capability(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x5ad6, chromeos_save_apl_pci_l1ss_capability);
 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x5ad6, chromeos_fixup_apl_pci_l1ss_capability);
+
+#ifdef CONFIG_SUSPEND
+/*
+ * Root Ports on some AMD SoCs advertise PME_Support for D3hot and D3cold, but
+ * if the SoC is put into a hardware sleep state by the amd-pmc driver, the
+ * Root Ports don't generate wakeup interrupts for USB devices.
+ *
+ * When suspending, remove D3hot and D3cold from the PME_Support advertised
+ * by the Root Port so we don't use those states if we're expecting wakeup
+ * interrupts. Restore the advertised PME_Support when resuming.
+ */
+static void amd_rp_pme_suspend(struct pci_dev *dev)
+{
+	struct pci_dev *rp;
+
+	/*
+	 * PM_SUSPEND_ON means we're doing runtime suspend, which means
+	 * amd-pmc will not be involved so PMEs during D3 work as advertised.
+	 *
+	 * The PMEs *do* work if amd-pmc doesn't put the SoC in the hardware
+	 * sleep state, but we assume amd-pmc is always present.
+	 */
+	if (pm_suspend_target_state == PM_SUSPEND_ON)
+		return;
+
+	rp = pcie_find_root_port(dev);
+	if (!rp->pm_cap)
+		return;
+
+	rp->pme_support &= ~((PCI_PM_CAP_PME_D3hot|PCI_PM_CAP_PME_D3cold) >>
+			     PCI_PM_CAP_PME_SHIFT);
+	dev_info_once(&rp->dev, "quirk: disabling D3cold for suspend\n");
+}
+
+static void amd_rp_pme_resume(struct pci_dev *dev)
+{
+	struct pci_dev *rp;
+	u16 pmc;
+
+	rp = pcie_find_root_port(dev);
+	if (!rp->pm_cap)
+		return;
+
+	pci_read_config_word(rp, rp->pm_cap + PCI_PM_PMC, &pmc);
+	rp->pme_support = FIELD_GET(PCI_PM_CAP_PME_MASK, pmc);
+}
+/* Rembrandt (yellow_carp) */
+DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x162e, amd_rp_pme_suspend);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x162e, amd_rp_pme_resume);
+DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x162f, amd_rp_pme_suspend);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x162f, amd_rp_pme_resume);
+/* Phoenix (pink_sardine) */
+DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_suspend);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_resume);
+DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_suspend);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_resume);
+#endif /* CONFIG_SUSPEND */

@@ -2291,19 +2291,21 @@ static int get_esi(struct atm_dev *dev)
 static int reset_sar(struct atm_dev *dev)
 {
 	IADEV *iadev;
-	int i, error = 1;
+	int i, error;
 	unsigned int pci[64];

 	iadev = INPH_IA_DEV(dev);
-	for(i=0; i<64; i++)
-		if ((error = pci_read_config_dword(iadev->pci,
-				i*4, &pci[i])) != PCIBIOS_SUCCESSFUL)
-			return error;
+	for (i = 0; i < 64; i++) {
+		error = pci_read_config_dword(iadev->pci, i * 4, &pci[i]);
+		if (error != PCIBIOS_SUCCESSFUL)
+			return error;
+	}
 	writel(0, iadev->reg+IPHASE5575_EXT_RESET);
-	for(i=0; i<64; i++)
-		if ((error = pci_write_config_dword(iadev->pci,
-				i*4, pci[i])) != PCIBIOS_SUCCESSFUL)
-			return error;
+	for (i = 0; i < 64; i++) {
+		error = pci_write_config_dword(iadev->pci, i * 4, pci[i]);
+		if (error != PCIBIOS_SUCCESSFUL)
+			return error;
+	}
 	udelay(5);
 	return 0;
 }

@@ -1393,14 +1393,11 @@ void amdgpu_acpi_detect(void)
 	struct pci_dev *pdev = NULL;
 	int ret;

-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
-		if (!atif->handle)
-			amdgpu_atif_pci_probe_handle(pdev);
-		if (!atcs->handle)
-			amdgpu_atcs_pci_probe_handle(pdev);
-	}
+	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
+		if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) &&
+		    (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8))
+			continue;

-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
 		if (!atif->handle)
 			amdgpu_atif_pci_probe_handle(pdev);
 		if (!atcs->handle)

@@ -287,7 +287,11 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
 	if (adev->flags & AMD_IS_APU)
 		return false;

-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
+	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
+		if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) &&
+		    (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8))
+			continue;
+
 		dhandle = ACPI_HANDLE(&pdev->dev);
 		if (!dhandle)
 			continue;
@@ -299,20 +303,6 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
 		}
 	}

-	if (!found) {
-		while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
-			dhandle = ACPI_HANDLE(&pdev->dev);
-			if (!dhandle)
-				continue;
-
-			status = acpi_get_handle(dhandle, "ATRM", &atrm_handle);
-			if (ACPI_SUCCESS(status)) {
-				found = true;
-				break;
-			}
-		}
-	}
-
 	if (!found)
 		return false;

 	pci_dev_put(pdev);

@@ -284,14 +284,11 @@ static bool nouveau_dsm_detect(void)
 		printk("MXM: GUID detected in BIOS\n");

 	/* now do DSM detection */
-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
-		vga_count++;
-
-		nouveau_dsm_pci_probe(pdev, &dhandle, &has_mux, &has_optimus,
-				      &has_optimus_flags, &has_power_resources);
-	}
-
-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_3D << 8, pdev)) != NULL) {
+	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
+		if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) &&
+		    (pdev->class != PCI_CLASS_DISPLAY_3D << 8))
+			continue;
+
 		vga_count++;

 		nouveau_dsm_pci_probe(pdev, &dhandle, &has_mux, &has_optimus,

@@ -68,11 +68,6 @@ module_param_named(num_heads, qxl_num_crtc, int, 0400);
 static struct drm_driver qxl_driver;
 static struct pci_driver qxl_pci_driver;

-static bool is_vga(struct pci_dev *pdev)
-{
-	return pdev->class == PCI_CLASS_DISPLAY_VGA << 8;
-}
-
 static int
 qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
@@ -100,7 +95,7 @@ qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (ret)
 		goto disable_pci;

-	if (is_vga(pdev) && pdev->revision < 5) {
+	if (pci_is_vga(pdev) && pdev->revision < 5) {
 		ret = vga_get_interruptible(pdev, VGA_RSRC_LEGACY_IO);
 		if (ret) {
 			DRM_ERROR("can't get legacy vga ioports\n");
@@ -131,7 +126,7 @@ modeset_cleanup:
 unload:
 	qxl_device_fini(qdev);
 put_vga:
-	if (is_vga(pdev) && pdev->revision < 5)
+	if (pci_is_vga(pdev) && pdev->revision < 5)
 		vga_put(pdev, VGA_RSRC_LEGACY_IO);
 disable_pci:
 	pci_disable_device(pdev);
@@ -159,7 +154,7 @@ qxl_pci_remove(struct pci_dev *pdev)
 	drm_dev_unregister(dev);
 	drm_atomic_helper_shutdown(dev);
-	if (is_vga(pdev) && pdev->revision < 5)
+	if (pci_is_vga(pdev) && pdev->revision < 5)
 		vga_put(pdev, VGA_RSRC_LEGACY_IO);
 }

@@ -199,7 +199,11 @@ static bool radeon_atrm_get_bios(struct radeon_device *rdev)
 	if (rdev->flags & RADEON_IS_IGP)
 		return false;

-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
+	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
+		if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) &&
+		    (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8))
+			continue;
+
 		dhandle = ACPI_HANDLE(&pdev->dev);
 		if (!dhandle)
 			continue;
@@ -211,20 +215,6 @@ static bool radeon_atrm_get_bios(struct radeon_device *rdev)
 		}
 	}

-	if (!found) {
-		while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
-			dhandle = ACPI_HANDLE(&pdev->dev);
-			if (!dhandle)
-				continue;
-
-			status = acpi_get_handle(dhandle, "ATRM", &atrm_handle);
-			if (ACPI_SUCCESS(status)) {
-				found = true;
-				break;
-			}
-		}
-	}
-
 	if (!found)
 		return false;

 	pci_dev_put(pdev);

@@ -51,7 +51,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	const char *pname = dev_name(&pdev->dev);
-	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
+	bool vga = pci_is_vga(pdev);
 	int ret;

 	DRM_INFO("pci: %s detected at %s\n",

@@ -81,6 +81,7 @@
 #define PCI_DEVICE_ID_RENESAS_R8A774B1		0x002b
 #define PCI_DEVICE_ID_RENESAS_R8A774C0		0x002d
 #define PCI_DEVICE_ID_RENESAS_R8A774E1		0x0025
+#define PCI_DEVICE_ID_RENESAS_R8A779F0		0x0031

 static DEFINE_IDA(pci_endpoint_test_ida);

@@ -990,6 +991,9 @@ static const struct pci_device_id pci_endpoint_test_tbl[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774B1),},
 	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),},
 	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774E1),},
+	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A779F0),
+	  .driver_data = (kernel_ulong_t)&default_data,
+	},
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
 	  .driver_data = (kernel_ulong_t)&j721e_data,
 	},

@@ -170,7 +170,7 @@ config PCI_P2PDMA
 	select GENERIC_ALLOCATOR
 	select NEED_SG_DMA_FLAGS
 	help
-	  Enableѕ drivers to do PCI peer-to-peer transactions to and from
+	  Enables drivers to do PCI peer-to-peer transactions to and from
 	  BARs that are exposed in other devices that are the part of
 	  the hierarchy where peer-to-peer DMA is guaranteed by the PCI
 	  specification to work (ie. anything below a single PCI bridge).

@@ -9,6 +9,7 @@
  * Copyright (C) 2011 Advanced Micro Devices,
  */

+#include <linux/bitfield.h>
 #include <linux/export.h>
 #include <linux/pci-ats.h>
 #include <linux/pci.h>
@@ -480,8 +481,6 @@ int pci_pasid_features(struct pci_dev *pdev)
 }
 EXPORT_SYMBOL_GPL(pci_pasid_features);

-#define PASID_NUMBER_SHIFT	8
-#define PASID_NUMBER_MASK	(0x1f << PASID_NUMBER_SHIFT)
 /**
  * pci_max_pasids - Get maximum number of PASIDs supported by device
  * @pdev: PCI device structure
@@ -503,9 +502,7 @@ int pci_max_pasids(struct pci_dev *pdev)

 	pci_read_config_word(pdev, pasid + PCI_PASID_CAP, &supported);

-	supported = (supported & PASID_NUMBER_MASK) >> PASID_NUMBER_SHIFT;
-
-	return (1 << supported);
+	return (1 << FIELD_GET(PCI_PASID_CAP_WIDTH, supported));
 }
 EXPORT_SYMBOL_GPL(pci_max_pasids);
 #endif /* CONFIG_PCI_PASID */

@@ -324,6 +324,17 @@ config PCIE_XILINX
 	  Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
 	  Host Bridge driver.

+config PCIE_XILINX_DMA_PL
+	bool "Xilinx DMA PL PCIe host bridge support"
+	depends on ARCH_ZYNQMP || COMPILE_TEST
+	depends on PCI_MSI
+	select PCI_HOST_COMMON
+	help
+	  Say 'Y' here if you want kernel support for the Xilinx PL DMA
+	  PCIe host bridge. The controller is a Soft IP which can act as
+	  Root Port. If your system provides Xilinx PCIe host controller
+	  bridge DMA as Soft IP say 'Y'; if you are not sure, say 'N'.
+
 config PCIE_XILINX_NWL
 	bool "Xilinx NWL PCIe controller"
 	depends on ARCH_ZYNQMP || COMPILE_TEST

@@ -17,6 +17,7 @@ obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
 obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
 obj-$(CONFIG_PCIE_XILINX_CPM) += pcie-xilinx-cpm.o
+obj-$(CONFIG_PCIE_XILINX_DMA_PL) += pcie-xilinx-dma-pl.o
 obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o
 obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
 obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o

@@ -3,6 +3,7 @@
 // Cadence PCIe endpoint controller driver.
 // Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>

+#include <linux/bitfield.h>
 #include <linux/delay.h>
 #include <linux/kernel.h>
 #include <linux/of.h>
@@ -262,7 +263,7 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
 	 * Get the Multiple Message Enable bitfield from the Message Control
 	 * register.
 	 */
-	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+	mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);

 	return mme;
 }
@@ -394,7 +395,7 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
 		return -EINVAL;

 	/* Get the number of enabled MSIs */
-	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+	mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
 	msi_count = 1 << mme;
 	if (!interrupt_num || interrupt_num > msi_count)
 		return -EINVAL;
@@ -449,7 +450,7 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
 		return -EINVAL;

 	/* Get the number of enabled MSIs */
-	mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+	mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
 	msi_count = 1 << mme;
 	if (!interrupt_num || interrupt_num > msi_count)
 		return -EINVAL;
@@ -506,7 +507,7 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
 	reg = cap + PCI_MSIX_TABLE;
 	tbl_offset = cdns_pcie_ep_fn_readl(pcie, fn, reg);
-	bir = tbl_offset & PCI_MSIX_TABLE_BIR;
+	bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
 	tbl_offset &= PCI_MSIX_TABLE_OFFSET;

 	msix_tbl = epf->epf_bar[bir]->addr + tbl_offset;

@@ -17,12 +17,9 @@
 /**
  * struct cdns_plat_pcie - private data for this PCIe platform driver
  * @pcie: Cadence PCIe controller
- * @is_rc: Set to 1 indicates the PCIe controller mode is Root Complex,
- *         if 0 it is in Endpoint mode.
  */
 struct cdns_plat_pcie {
 	struct cdns_pcie        *pcie;
-	bool is_rc;
 };

 struct cdns_plat_pcie_of_data {
@@ -76,7 +73,6 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
 		rc->pcie.dev = dev;
 		rc->pcie.ops = &cdns_plat_ops;
 		cdns_plat_pcie->pcie = &rc->pcie;
-		cdns_plat_pcie->is_rc = is_rc;

 		ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
 		if (ret) {
@@ -104,7 +100,6 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
 		ep->pcie.dev = dev;
 		ep->pcie.ops = &cdns_plat_ops;
 		cdns_plat_pcie->pcie = &ep->pcie;
-		cdns_plat_pcie->is_rc = is_rc;

 		ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
 		if (ret) {

@@ -286,6 +286,31 @@ config PCIE_QCOM_EP
 	  to work in endpoint mode. The PCIe controller uses the DesignWare core
 	  plus Qualcomm-specific hardware wrappers.

+config PCIE_RCAR_GEN4
+	tristate
+
+config PCIE_RCAR_GEN4_HOST
+	tristate "Renesas R-Car Gen4 PCIe controller (host mode)"
+	depends on ARCH_RENESAS || COMPILE_TEST
+	depends on PCI_MSI
+	select PCIE_DW_HOST
+	select PCIE_RCAR_GEN4
+	help
+	  Say Y here if you want PCIe controller (host mode) on R-Car Gen4 SoCs.
+	  To compile this driver as a module, choose M here: the module will be
+	  called pcie-rcar-gen4.ko. This uses the DesignWare core.
+
+config PCIE_RCAR_GEN4_EP
+	tristate "Renesas R-Car Gen4 PCIe controller (endpoint mode)"
+	depends on ARCH_RENESAS || COMPILE_TEST
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCIE_RCAR_GEN4
+	help
+	  Say Y here if you want PCIe controller (endpoint mode) on R-Car Gen4
+	  SoCs. To compile this driver as a module, choose M here: the module
+	  will be called pcie-rcar-gen4.ko. This uses the DesignWare core.
+
 config PCIE_ROCKCHIP_DW_HOST
 	bool "Rockchip DesignWare PCIe controller"
 	select PCIE_DW

@@ -26,6 +26,7 @@ obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
 obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
 obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o
+obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o

 # The following drivers are for devices that use the generic ACPI
 # pci_root.c driver but don't support standard ECAM config access.

@@ -375,7 +375,7 @@ fail_probe:
 	return ret;
 }

-static int __exit exynos_pcie_remove(struct platform_device *pdev)
+static int exynos_pcie_remove(struct platform_device *pdev)
 {
 	struct exynos_pcie *ep = platform_get_drvdata(pdev);

@@ -431,7 +431,7 @@ static const struct of_device_id exynos_pcie_of_match[] = {

 static struct platform_driver exynos_pcie_driver = {
 	.probe		= exynos_pcie_probe,
-	.remove		= __exit_p(exynos_pcie_remove),
+	.remove		= exynos_pcie_remove,
 	.driver = {
 		.name	= "exynos-pcie",
 		.of_match_table = exynos_pcie_of_match,

@@ -1100,7 +1100,7 @@ static const struct of_device_id ks_pcie_of_match[] = {
 	{ },
 };

-static int __init ks_pcie_probe(struct platform_device *pdev)
+static int ks_pcie_probe(struct platform_device *pdev)
 {
 	const struct dw_pcie_host_ops *host_ops;
 	const struct dw_pcie_ep_ops *ep_ops;
@@ -1302,7 +1302,7 @@ err_link:
 	return ret;
 }

-static int __exit ks_pcie_remove(struct platform_device *pdev)
+static int ks_pcie_remove(struct platform_device *pdev)
 {
 	struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
 	struct device_link **link = ks_pcie->link;
@@ -1318,9 +1318,9 @@ static int ks_pcie_remove(struct platform_device *pdev)
 	return 0;
 }

-static struct platform_driver ks_pcie_driver __refdata = {
+static struct platform_driver ks_pcie_driver = {
 	.probe  = ks_pcie_probe,
-	.remove = __exit_p(ks_pcie_remove),
+	.remove = ks_pcie_remove,
 	.driver = {
 		.name	= "keystone-pcie",
 		.of_match_table = ks_pcie_of_match,

@@ -266,6 +266,8 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)

 	pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");

+	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+
 	platform_set_drvdata(pdev, pcie);

 	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);

@@ -58,7 +58,7 @@ static bool ls_pcie_is_bridge(struct ls_pcie *pcie)
 	u32 header_type;

 	header_type = ioread8(pci->dbi_base + PCI_HEADER_TYPE);
-	header_type &= 0x7f;
+	header_type &= PCI_HEADER_TYPE_MASK;

 	return header_type == PCI_HEADER_TYPE_BRIDGE;
 }

@@ -6,6 +6,7 @@
  * Author: Kishon Vijay Abraham I <kishon@ti.com>
 */

+#include <linux/bitfield.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>

@@ -52,21 +53,35 @@ static unsigned int dw_pcie_ep_func_select(struct dw_pcie_ep *ep, u8 func_no)
 	return func_offset;
 }

+static unsigned int dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep, u8 func_no)
+{
+	unsigned int dbi2_offset = 0;
+
+	if (ep->ops->get_dbi2_offset)
+		dbi2_offset = ep->ops->get_dbi2_offset(ep, func_no);
+	else if (ep->ops->func_conf_select)	/* for backward compatibility */
+		dbi2_offset = ep->ops->func_conf_select(ep, func_no);
+
+	return dbi2_offset;
+}
+
 static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, u8 func_no,
 				   enum pci_barno bar, int flags)
 {
-	u32 reg;
-	unsigned int func_offset = 0;
+	unsigned int func_offset, dbi2_offset;
 	struct dw_pcie_ep *ep = &pci->ep;
+	u32 reg, reg_dbi2;

 	func_offset = dw_pcie_ep_func_select(ep, func_no);
+	dbi2_offset = dw_pcie_ep_get_dbi2_offset(ep, func_no);

 	reg = func_offset + PCI_BASE_ADDRESS_0 + (4 * bar);
+	reg_dbi2 = dbi2_offset + PCI_BASE_ADDRESS_0 + (4 * bar);
 	dw_pcie_dbi_ro_wr_en(pci);
-	dw_pcie_writel_dbi2(pci, reg, 0x0);
+	dw_pcie_writel_dbi2(pci, reg_dbi2, 0x0);
 	dw_pcie_writel_dbi(pci, reg, 0x0);
 	if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
-		dw_pcie_writel_dbi2(pci, reg + 4, 0x0);
+		dw_pcie_writel_dbi2(pci, reg_dbi2 + 4, 0x0);
 		dw_pcie_writel_dbi(pci, reg + 4, 0x0);
 	}
 	dw_pcie_dbi_ro_wr_dis(pci);
@@ -228,16 +243,18 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 {
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	unsigned int func_offset, dbi2_offset;
 	enum pci_barno bar = epf_bar->barno;
 	size_t size = epf_bar->size;
 	int flags = epf_bar->flags;
-	unsigned int func_offset = 0;
+	u32 reg, reg_dbi2;
 	int ret, type;
-	u32 reg;

 	func_offset = dw_pcie_ep_func_select(ep, func_no);
+	dbi2_offset = dw_pcie_ep_get_dbi2_offset(ep, func_no);

 	reg = PCI_BASE_ADDRESS_0 + (4 * bar) + func_offset;
+	reg_dbi2 = PCI_BASE_ADDRESS_0 + (4 * bar) + dbi2_offset;

 	if (!(flags & PCI_BASE_ADDRESS_SPACE))
 		type = PCIE_ATU_TYPE_MEM;
@@ -253,11 +270,11 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,

 	dw_pcie_dbi_ro_wr_en(pci);

-	dw_pcie_writel_dbi2(pci, reg, lower_32_bits(size - 1));
+	dw_pcie_writel_dbi2(pci, reg_dbi2, lower_32_bits(size - 1));
 	dw_pcie_writel_dbi(pci, reg, flags);

 	if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
-		dw_pcie_writel_dbi2(pci, reg + 4, upper_32_bits(size - 1));
+		dw_pcie_writel_dbi2(pci, reg_dbi2 + 4, upper_32_bits(size - 1));
 		dw_pcie_writel_dbi(pci, reg + 4, 0);
 	}

@@ -334,7 +351,7 @@ static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
 	if (!(val & PCI_MSI_FLAGS_ENABLE))
 		return -EINVAL;

-	val = (val & PCI_MSI_FLAGS_QSIZE) >> 4;
+	val = FIELD_GET(PCI_MSI_FLAGS_QSIZE, val);

 	return val;
 }
@@ -357,7 +374,7 @@ static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 	reg = ep_func->msi_cap + func_offset + PCI_MSI_FLAGS;
 	val = dw_pcie_readw_dbi(pci, reg);
 	val &= ~PCI_MSI_FLAGS_QMASK;
-	val |= (interrupts << 1) & PCI_MSI_FLAGS_QMASK;
+	val |= FIELD_PREP(PCI_MSI_FLAGS_QMASK, interrupts);
 	dw_pcie_dbi_ro_wr_en(pci);
 	dw_pcie_writew_dbi(pci, reg, val);
 	dw_pcie_dbi_ro_wr_dis(pci);
@@ -584,7 +601,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
 	reg = ep_func->msix_cap + func_offset + PCI_MSIX_TABLE;
 	tbl_offset = dw_pcie_readl_dbi(pci, reg);
-	bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
+	bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
 	tbl_offset &= PCI_MSIX_TABLE_OFFSET;

 	msix_tbl = ep->epf_bar[bir]->addr + tbl_offset;
@@ -621,7 +638,11 @@ void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 			      epc->mem->window.page_size);

 	pci_epc_mem_exit(epc);
+
+	if (ep->ops->deinit)
+		ep->ops->deinit(ep);
 }
+EXPORT_SYMBOL_GPL(dw_pcie_ep_exit);

 static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
 {
@@ -723,6 +744,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	ep->phys_base = res->start;
 	ep->addr_size = resource_size(res);

+	if (ep->ops->pre_init)
+		ep->ops->pre_init(ep);
+
 	dw_pcie_version_detect(pci);

 	dw_pcie_iatu_detect(pci);
@@ -777,7 +801,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 					     ep->page_size);
 	if (ret < 0) {
 		dev_err(dev, "Failed to initialize address space\n");
-		return ret;
+		goto err_ep_deinit;
 	}

 	ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
@@ -814,6 +838,10 @@ err_free_epc_mem:
 err_exit_epc_mem:
 	pci_epc_mem_exit(epc);

+err_ep_deinit:
+	if (ep->ops->deinit)
+		ep->ops->deinit(ep);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dw_pcie_ep_init);

@@ -502,6 +502,9 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 		if (ret)
 			goto err_stop_link;

+		if (pp->ops->host_post_init)
+			pp->ops->host_post_init(pp);
+
 		return 0;

 err_stop_link:

@@ -365,6 +365,7 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
 	if (ret)
 		dev_err(pci->dev, "write DBI address failed\n");
 }
+EXPORT_SYMBOL_GPL(dw_pcie_write_dbi2);

 static inline void __iomem *dw_pcie_select_atu(struct dw_pcie *pci, u32 dir,
 					       u32 index)
@@ -732,6 +733,53 @@ static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen)

 }

+static void dw_pcie_link_set_max_link_width(struct dw_pcie *pci, u32 num_lanes)
+{
+	u32 lnkcap, lwsc, plc;
+	u8 cap;
+
+	if (!num_lanes)
+		return;
+
+	/* Set the number of lanes */
+	plc = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
+	plc &= ~PORT_LINK_FAST_LINK_MODE;
+	plc &= ~PORT_LINK_MODE_MASK;
+
+	/* Set link width speed control register */
+	lwsc = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+	lwsc &= ~PORT_LOGIC_LINK_WIDTH_MASK;
+	switch (num_lanes) {
+	case 1:
+		plc |= PORT_LINK_MODE_1_LANES;
+		lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES;
+		break;
+	case 2:
+		plc |= PORT_LINK_MODE_2_LANES;
+		lwsc |= PORT_LOGIC_LINK_WIDTH_2_LANES;
+		break;
+	case 4:
+		plc |= PORT_LINK_MODE_4_LANES;
+		lwsc |= PORT_LOGIC_LINK_WIDTH_4_LANES;
+		break;
+	case 8:
+		plc |= PORT_LINK_MODE_8_LANES;
+		lwsc |= PORT_LOGIC_LINK_WIDTH_8_LANES;
+		break;
+	default:
+		dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes);
+		return;
+	}
+	dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, plc);
+	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, lwsc);
+
+	cap = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+	lnkcap = dw_pcie_readl_dbi(pci, cap + PCI_EXP_LNKCAP);
+	lnkcap &= ~PCI_EXP_LNKCAP_MLW;
+	lnkcap |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, num_lanes);
+	dw_pcie_writel_dbi(pci, cap + PCI_EXP_LNKCAP, lnkcap);
+}
+
 void dw_pcie_iatu_detect(struct dw_pcie *pci)
 {
 	int max_region, ob, ib;
@@ -840,8 +888,14 @@ static int dw_pcie_edma_find_chip(struct dw_pcie *pci)
 	 * Indirect eDMA CSRs access has been completely removed since v5.40a
 	 * thus no space is now reserved for the eDMA channels viewport and
 	 * former DMA CTRL register is no longer fixed to FFs.
+	 *
+	 * Note that Renesas R-Car S4-8's PCIe controllers for unknown reason
+	 * have zeros in the eDMA CTRL register even though the HW-manual
+	 * explicitly states there must be FFs if the unrolled mapping is
+	 * enabled. For such cases the low-level drivers are supposed to
+	 * manually activate the unrolled mapping to bypass the auto-detection
+	 * procedure.
 	 */
-	if (dw_pcie_ver_is_ge(pci, 540A))
+	if (dw_pcie_ver_is_ge(pci, 540A) || dw_pcie_cap_is(pci, EDMA_UNROLL))
 		val = 0xFFFFFFFF;
 	else
 		val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL);
@@ -1013,49 +1067,5 @@ void dw_pcie_setup(struct dw_pcie *pci)
 		val |= PORT_LINK_DLL_LINK_EN;
 	dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);

-	if (!pci->num_lanes) {
-		dev_dbg(pci->dev, "Using h/w default number of lanes\n");
-		return;
-	}
-
-	/* Set the number of lanes */
-	val &= ~PORT_LINK_FAST_LINK_MODE;
-	val &= ~PORT_LINK_MODE_MASK;
-	switch (pci->num_lanes) {
-	case 1:
-		val |= PORT_LINK_MODE_1_LANES;
-		break;
-	case 2:
-		val |= PORT_LINK_MODE_2_LANES;
-		break;
-	case 4:
-		val |= PORT_LINK_MODE_4_LANES;
-		break;
-	case 8:
-		val |= PORT_LINK_MODE_8_LANES;
-		break;
-	default:
-		dev_err(pci->dev, "num-lanes %u: invalid value\n", pci->num_lanes);
-		return;
-	}
-	dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
-
-	/* Set link width speed control register */
-	val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
-	val &= ~PORT_LOGIC_LINK_WIDTH_MASK;
-	switch (pci->num_lanes) {
-	case 1:
-		val |= PORT_LOGIC_LINK_WIDTH_1_LANES;
-		break;
-	case 2:
-		val |= PORT_LOGIC_LINK_WIDTH_2_LANES;
-		break;
-	case 4:
-		val |= PORT_LOGIC_LINK_WIDTH_4_LANES;
-		break;
-	case 8:
-		val |= PORT_LOGIC_LINK_WIDTH_8_LANES;
-		break;
-	}
-	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+	dw_pcie_link_set_max_link_width(pci, pci->num_lanes);
 }

@@ -51,8 +51,9 @@

 /* DWC PCIe controller capabilities */
 #define DW_PCIE_CAP_REQ_RES		0
-#define DW_PCIE_CAP_IATU_UNROLL		1
-#define DW_PCIE_CAP_CDM_CHECK		2
+#define DW_PCIE_CAP_EDMA_UNROLL		1
+#define DW_PCIE_CAP_IATU_UNROLL		2
+#define DW_PCIE_CAP_CDM_CHECK		3

 #define dw_pcie_cap_is(_pci, _cap) \
 	test_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps)
@@ -301,6 +302,7 @@ enum dw_pcie_ltssm {
 struct dw_pcie_host_ops {
 	int (*host_init)(struct dw_pcie_rp *pp);
 	void (*host_deinit)(struct dw_pcie_rp *pp);
+	void (*host_post_init)(struct dw_pcie_rp *pp);
 	int (*msi_host_init)(struct dw_pcie_rp *pp);
 	void (*pme_turn_off)(struct dw_pcie_rp *pp);
 };
@@ -329,7 +331,9 @@ struct dw_pcie_rp {
 };

 struct dw_pcie_ep_ops {
+	void (*pre_init)(struct dw_pcie_ep *ep);
 	void (*ep_init)(struct dw_pcie_ep *ep);
+	void (*deinit)(struct dw_pcie_ep *ep);
 	int (*raise_irq)(struct dw_pcie_ep *ep, u8 func_no,
 			 enum pci_epc_irq_type type, u16 interrupt_num);
 	const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep);
@@ -341,6 +345,7 @@ struct dw_pcie_ep_ops {
 	 * driver.
 	 */
 	unsigned int (*func_conf_select)(struct dw_pcie_ep *ep, u8 func_no);
+	unsigned int (*get_dbi2_offset)(struct dw_pcie_ep *ep, u8 func_no);
 };

 struct dw_pcie_ep_func {

@@ -741,7 +741,7 @@ err:
 	return ret;
 }

-static int __exit kirin_pcie_remove(struct platform_device *pdev)
+static int kirin_pcie_remove(struct platform_device *pdev)
 {
 	struct kirin_pcie *kirin_pcie = platform_get_drvdata(pdev);

@@ -818,7 +818,7 @@ static int kirin_pcie_probe(struct platform_device *pdev)

 static struct platform_driver kirin_pcie_driver = {
 	.probe			= kirin_pcie_probe,
-	.remove			= __exit_p(kirin_pcie_remove),
+	.remove			= kirin_pcie_remove,
 	.driver			= {
 		.name			= "kirin-pcie",
 		.of_match_table		= kirin_pcie_match,

View File

@ -23,6 +23,7 @@
#include <linux/reset.h> #include <linux/reset.h>
#include <linux/module.h> #include <linux/module.h>
#include "../../pci.h"
#include "pcie-designware.h" #include "pcie-designware.h"
/* PARF registers */ /* PARF registers */
@ -123,6 +124,7 @@
/* ELBI registers */ /* ELBI registers */
#define ELBI_SYS_STTS 0x08 #define ELBI_SYS_STTS 0x08
#define ELBI_CS2_ENABLE 0xa4
/* DBI registers */ /* DBI registers */
#define DBI_CON_STATUS 0x44 #define DBI_CON_STATUS 0x44
@ -135,10 +137,8 @@
#define CORE_RESET_TIME_US_MAX 1005 #define CORE_RESET_TIME_US_MAX 1005
#define WAKE_DELAY_US 2000 /* 2 ms */ #define WAKE_DELAY_US 2000 /* 2 ms */
#define PCIE_GEN1_BW_MBPS 250 #define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \
#define PCIE_GEN2_BW_MBPS 500 Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]))
#define PCIE_GEN3_BW_MBPS 985
#define PCIE_GEN4_BW_MBPS 1969
#define to_pcie_ep(x) dev_get_drvdata((x)->dev) #define to_pcie_ep(x) dev_get_drvdata((x)->dev)
@ -263,10 +263,25 @@ static void qcom_pcie_dw_stop_link(struct dw_pcie *pci)
disable_irq(pcie_ep->perst_irq); disable_irq(pcie_ep->perst_irq);
} }
static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base,
u32 reg, size_t size, u32 val)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
int ret;
writel(1, pcie_ep->elbi + ELBI_CS2_ENABLE);
ret = dw_pcie_write(pci->dbi_base2 + reg, size, val);
if (ret)
dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret);
writel(0, pcie_ep->elbi + ELBI_CS2_ENABLE);
}
static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep) static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep)
{ {
struct dw_pcie *pci = &pcie_ep->pci; struct dw_pcie *pci = &pcie_ep->pci;
u32 offset, status, bw; u32 offset, status;
int speed, width; int speed, width;
int ret; int ret;
@@ -279,25 +294,7 @@ static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep)
 	speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
 	width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
-	switch (speed) {
-	case 1:
-		bw = MBps_to_icc(PCIE_GEN1_BW_MBPS);
-		break;
-	case 2:
-		bw = MBps_to_icc(PCIE_GEN2_BW_MBPS);
-		break;
-	case 3:
-		bw = MBps_to_icc(PCIE_GEN3_BW_MBPS);
-		break;
-	default:
-		dev_warn(pci->dev, "using default GEN4 bandwidth\n");
-		fallthrough;
-	case 4:
-		bw = MBps_to_icc(PCIE_GEN4_BW_MBPS);
-		break;
-	}
-	ret = icc_set_bw(pcie_ep->icc_mem, 0, width * bw);
+	ret = icc_set_bw(pcie_ep->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
 	if (ret)
 		dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
 			ret);
@@ -335,7 +332,7 @@ static int qcom_pcie_enable_resources(struct qcom_pcie_ep *pcie_ep)
 	 * Set an initial peak bandwidth corresponding to single-lane Gen 1
 	 * for the pcie-mem path.
 	 */
-	ret = icc_set_bw(pcie_ep->icc_mem, 0, MBps_to_icc(PCIE_GEN1_BW_MBPS));
+	ret = icc_set_bw(pcie_ep->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
 	if (ret) {
 		dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
 			ret);
@@ -519,6 +516,7 @@ static const struct dw_pcie_ops pci_ops = {
 	.link_up = qcom_pcie_dw_link_up,
 	.start_link = qcom_pcie_dw_start_link,
 	.stop_link = qcom_pcie_dw_stop_link,
+	.write_dbi2 = qcom_pcie_dw_write_dbi2,
 };
 static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev,
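The qcom hunks above (and the matching host-driver hunks further down) replace per-generation MBps tables with one macro keyed off PCIE_SPEED2MBS_ENC(). A standalone sketch of the arithmetic, using per-lane rates derived from the encoding overhead (8b/10b for Gen1/Gen2, 128b/130b for Gen3) rather than copying the kernel's table:

#include <stdio.h>

/* Per-lane data rate in Mb/s after encoding overhead; mirrors what
 * PCIE_SPEED2MBS_ENC() yields for these generations. */
static unsigned int speed2mbs(unsigned int gen)
{
	switch (gen) {
	case 1: return 2000;	/* 2.5 GT/s * 8/10 */
	case 2: return 4000;	/* 5.0 GT/s * 8/10 */
	case 3: return 7877;	/* 8.0 GT/s * 128/130 */
	default: return 0;
	}
}

int main(void)
{
	unsigned int gen = 3, width = 2;

	/* width * per-lane Mb/s replaces the old per-generation switch */
	printf("x%u Gen%u -> %u Mb/s\n", width, gen, width * speed2mbs(gen));
	return 0;
}

Note that the old code's MBps_to_icc(250) for single-lane Gen1 and the new Mbps_to_icc(2000) describe the same bandwidth, just in different units.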


@@ -147,6 +147,9 @@
 #define QCOM_PCIE_CRC8_POLYNOMIAL		(BIT(2) | BIT(1) | BIT(0))
+#define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \
+		Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]))
+
 #define QCOM_PCIE_1_0_0_MAX_CLOCKS		4
 struct qcom_pcie_resources_1_0_0 {
 	struct clk_bulk_data clks[QCOM_PCIE_1_0_0_MAX_CLOCKS];
@@ -218,6 +221,7 @@ struct qcom_pcie_ops {
 	int (*get_resources)(struct qcom_pcie *pcie);
 	int (*init)(struct qcom_pcie *pcie);
 	int (*post_init)(struct qcom_pcie *pcie);
+	void (*host_post_init)(struct qcom_pcie *pcie);
 	void (*deinit)(struct qcom_pcie *pcie);
 	void (*ltssm_enable)(struct qcom_pcie *pcie);
 	int (*config_sid)(struct qcom_pcie *pcie);
@@ -962,6 +966,22 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
 	return 0;
 }
+static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
+{
+	/* Downstream devices need to be in D0 state before enabling PCI PM substates */
+	pci_set_power_state(pdev, PCI_D0);
+	pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL);
+
+	return 0;
+}
+
+static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie)
+{
+	struct dw_pcie_rp *pp = &pcie->pci->pp;
+
+	pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL);
+}
 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
@@ -1214,9 +1234,19 @@ static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
 	pcie->cfg->ops->deinit(pcie);
 }
+static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct qcom_pcie *pcie = to_qcom_pcie(pci);
+
+	if (pcie->cfg->ops->host_post_init)
+		pcie->cfg->ops->host_post_init(pcie);
+}
+
 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
 	.host_init	= qcom_pcie_host_init,
 	.host_deinit	= qcom_pcie_host_deinit,
+	.host_post_init	= qcom_pcie_host_post_init,
 };
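qcom_pcie_host_post_init_2_7_0() relies on pci_walk_bus() to apply the ASPM callback to every device below the root bus. A toy userspace model of that depth-first walk (the struct and names are invented for illustration; the real pci_walk_bus() walks struct pci_bus/pci_dev lists under locking):

#include <stdio.h>

struct toy_dev {
	const char *name;
	struct toy_dev *child;		/* first device on the subordinate bus */
	struct toy_dev *sibling;	/* next device on the same bus */
};

static int walk(struct toy_dev *dev,
		int (*cb)(struct toy_dev *, void *), void *userdata)
{
	for (; dev; dev = dev->sibling) {
		int ret = cb(dev, userdata);

		if (!ret && dev->child)
			ret = walk(dev->child, cb, userdata);
		if (ret)
			return ret;	/* a nonzero callback stops the walk */
	}
	return 0;
}

static int enable_aspm(struct toy_dev *dev, void *userdata)
{
	printf("enable ASPM on %s\n", dev->name);
	return 0;	/* keep walking, like qcom_pcie_enable_aspm() */
}

int main(void)
{
	struct toy_dev ep = { "endpoint", NULL, NULL };
	struct toy_dev rp = { "root port", &ep, NULL };

	return walk(&rp, enable_aspm, NULL);
}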
 /* Qcom IP rev.: 2.1.0	Synopsys IP rev.: 4.01a */
@@ -1278,6 +1308,7 @@ static const struct qcom_pcie_ops ops_1_9_0 = {
 	.get_resources = qcom_pcie_get_resources_2_7_0,
 	.init = qcom_pcie_init_2_7_0,
 	.post_init = qcom_pcie_post_init_2_7_0,
+	.host_post_init = qcom_pcie_host_post_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 	.config_sid = qcom_pcie_config_sid_1_9_0,
@@ -1345,7 +1376,7 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
 	 * Set an initial peak bandwidth corresponding to single-lane Gen 1
 	 * for the pcie-mem path.
 	 */
-	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
+	ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
 	if (ret) {
 		dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
 			ret);
@@ -1358,7 +1389,7 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
 static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
 {
 	struct dw_pcie *pci = pcie->pci;
-	u32 offset, status, bw;
+	u32 offset, status;
 	int speed, width;
 	int ret;
@@ -1375,22 +1406,7 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
 	speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
 	width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
-	switch (speed) {
-	case 1:
-		bw = MBps_to_icc(250);
-		break;
-	case 2:
-		bw = MBps_to_icc(500);
-		break;
-	default:
-		WARN_ON_ONCE(1);
-		fallthrough;
-	case 3:
-		bw = MBps_to_icc(985);
-		break;
-	}
-	ret = icc_set_bw(pcie->icc_mem, 0, width * bw);
+	ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
 	if (ret) {
 		dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
 			ret);


@@ -0,0 +1,527 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* PCIe controller driver for Renesas R-Car Gen4 Series SoCs
* Copyright (C) 2022-2023 Renesas Electronics Corporation
*/
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/reset.h>
#include "../../pci.h"
#include "pcie-designware.h"
/* Renesas-specific */
/* PCIe Mode Setting Register 0 */
#define PCIEMSR0 0x0000
#define BIFUR_MOD_SET_ON BIT(0)
#define DEVICE_TYPE_EP 0
#define DEVICE_TYPE_RC BIT(4)
/* PCIe Interrupt Status 0 */
#define PCIEINTSTS0 0x0084
/* PCIe Interrupt Status 0 Enable */
#define PCIEINTSTS0EN 0x0310
#define MSI_CTRL_INT BIT(26)
#define SMLH_LINK_UP BIT(7)
#define RDLH_LINK_UP BIT(6)
/* PCIe DMA Interrupt Status Enable */
#define PCIEDMAINTSTSEN 0x0314
#define PCIEDMAINTSTSEN_INIT GENMASK(15, 0)
/* PCIe Reset Control Register 1 */
#define PCIERSTCTRL1 0x0014
#define APP_HOLD_PHY_RST BIT(16)
#define APP_LTSSM_ENABLE BIT(0)
#define RCAR_NUM_SPEED_CHANGE_RETRIES 10
#define RCAR_MAX_LINK_SPEED 4
#define RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET 0x1000
#define RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET 0x800
struct rcar_gen4_pcie {
struct dw_pcie dw;
void __iomem *base;
struct platform_device *pdev;
enum dw_pcie_device_mode mode;
};
#define to_rcar_gen4_pcie(_dw) container_of(_dw, struct rcar_gen4_pcie, dw)
/* Common */
static void rcar_gen4_pcie_ltssm_enable(struct rcar_gen4_pcie *rcar,
bool enable)
{
u32 val;
val = readl(rcar->base + PCIERSTCTRL1);
if (enable) {
val |= APP_LTSSM_ENABLE;
val &= ~APP_HOLD_PHY_RST;
} else {
/*
 * The R-Car datasheet does not describe how to assert
 * APP_HOLD_PHY_RST, so leave it deasserted here. Asserting it again
 * caused a hang in dw_edma_core_off() when the controller did not
 * detect a PCI device.
 */
val &= ~APP_LTSSM_ENABLE;
}
writel(val, rcar->base + PCIERSTCTRL1);
}
static int rcar_gen4_pcie_link_up(struct dw_pcie *dw)
{
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
u32 val, mask;
val = readl(rcar->base + PCIEINTSTS0);
mask = RDLH_LINK_UP | SMLH_LINK_UP;
return (val & mask) == mask;
}
/*
* Manually initiate the speed change. Return 0 if change succeeded; otherwise
* -ETIMEDOUT.
*/
static int rcar_gen4_pcie_speed_change(struct dw_pcie *dw)
{
u32 val;
int i;
val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL);
val &= ~PORT_LOGIC_SPEED_CHANGE;
dw_pcie_writel_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL);
val |= PORT_LOGIC_SPEED_CHANGE;
dw_pcie_writel_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
for (i = 0; i < RCAR_NUM_SPEED_CHANGE_RETRIES; i++) {
val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL);
if (!(val & PORT_LOGIC_SPEED_CHANGE))
return 0;
usleep_range(10000, 11000);
}
return -ETIMEDOUT;
}
/*
* Enable LTSSM of this controller and manually initiate the speed change.
* Always return 0.
*/
static int rcar_gen4_pcie_start_link(struct dw_pcie *dw)
{
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
int i, changes;
rcar_gen4_pcie_ltssm_enable(rcar, true);
/*
 * Manually request direct speed changes (with retries) when the
 * target link speed is PCIe Gen2 or higher.
 */
changes = min_not_zero(dw->link_gen, RCAR_MAX_LINK_SPEED) - 1;
/*
 * In RC mode, dw_pcie_setup_rc() already requests one speed change,
 * which trains the link up to PCIe Gen2, so only the remaining
 * changes up to PCIe Gen4 are needed here.
 */
if (changes && rcar->mode == DW_PCIE_RC_TYPE)
changes--;
for (i = 0; i < changes; i++) {
/* It may not be connected in EP mode yet. So, break the loop */
if (rcar_gen4_pcie_speed_change(dw))
break;
}
return 0;
}
static void rcar_gen4_pcie_stop_link(struct dw_pcie *dw)
{
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
rcar_gen4_pcie_ltssm_enable(rcar, false);
}
static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie *dw = &rcar->dw;
u32 val;
int ret;
ret = clk_bulk_prepare_enable(DW_PCIE_NUM_CORE_CLKS, dw->core_clks);
if (ret) {
dev_err(dw->dev, "Enabling core clocks failed\n");
return ret;
}
if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc))
reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
val = readl(rcar->base + PCIEMSR0);
if (rcar->mode == DW_PCIE_RC_TYPE) {
val |= DEVICE_TYPE_RC;
} else if (rcar->mode == DW_PCIE_EP_TYPE) {
val |= DEVICE_TYPE_EP;
} else {
ret = -EINVAL;
goto err_unprepare;
}
if (dw->num_lanes < 4)
val |= BIFUR_MOD_SET_ON;
writel(val, rcar->base + PCIEMSR0);
ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
if (ret)
goto err_unprepare;
return 0;
err_unprepare:
clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, dw->core_clks);
return ret;
}
static void rcar_gen4_pcie_common_deinit(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie *dw = &rcar->dw;
reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, dw->core_clks);
}
static int rcar_gen4_pcie_prepare(struct rcar_gen4_pcie *rcar)
{
struct device *dev = rcar->dw.dev;
int err;
pm_runtime_enable(dev);
err = pm_runtime_resume_and_get(dev);
if (err < 0) {
dev_err(dev, "Runtime resume failed\n");
pm_runtime_disable(dev);
}
return err;
}
static void rcar_gen4_pcie_unprepare(struct rcar_gen4_pcie *rcar)
{
struct device *dev = rcar->dw.dev;
pm_runtime_put(dev);
pm_runtime_disable(dev);
}
static int rcar_gen4_pcie_get_resources(struct rcar_gen4_pcie *rcar)
{
/* Renesas-specific registers */
rcar->base = devm_platform_ioremap_resource_byname(rcar->pdev, "app");
return PTR_ERR_OR_ZERO(rcar->base);
}
static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = rcar_gen4_pcie_start_link,
.stop_link = rcar_gen4_pcie_stop_link,
.link_up = rcar_gen4_pcie_link_up,
};
static struct rcar_gen4_pcie *rcar_gen4_pcie_alloc(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rcar_gen4_pcie *rcar;
rcar = devm_kzalloc(dev, sizeof(*rcar), GFP_KERNEL);
if (!rcar)
return ERR_PTR(-ENOMEM);
rcar->dw.ops = &dw_pcie_ops;
rcar->dw.dev = dev;
rcar->pdev = pdev;
dw_pcie_cap_set(&rcar->dw, EDMA_UNROLL);
dw_pcie_cap_set(&rcar->dw, REQ_RES);
platform_set_drvdata(pdev, rcar);
return rcar;
}
/* Host mode */
static int rcar_gen4_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *dw = to_dw_pcie_from_pp(pp);
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
int ret;
u32 val;
gpiod_set_value_cansleep(dw->pe_rst, 1);
ret = rcar_gen4_pcie_common_init(rcar);
if (ret)
return ret;
/*
* According to the section 3.5.7.2 "RC Mode" in DWC PCIe Dual Mode
* Rev.5.20a and 3.5.6.1 "RC mode" in DWC PCIe RC databook v5.20a, we
* should disable two BARs to avoid unnecessary memory assignment
* during device enumeration.
*/
dw_pcie_writel_dbi2(dw, PCI_BASE_ADDRESS_0, 0x0);
dw_pcie_writel_dbi2(dw, PCI_BASE_ADDRESS_1, 0x0);
/* Enable MSI interrupt signal */
val = readl(rcar->base + PCIEINTSTS0EN);
val |= MSI_CTRL_INT;
writel(val, rcar->base + PCIEINTSTS0EN);
msleep(PCIE_T_PVPERL_MS); /* pe_rst requires 100msec delay */
gpiod_set_value_cansleep(dw->pe_rst, 0);
return 0;
}
static void rcar_gen4_pcie_host_deinit(struct dw_pcie_rp *pp)
{
struct dw_pcie *dw = to_dw_pcie_from_pp(pp);
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
gpiod_set_value_cansleep(dw->pe_rst, 1);
rcar_gen4_pcie_common_deinit(rcar);
}
static const struct dw_pcie_host_ops rcar_gen4_pcie_host_ops = {
.host_init = rcar_gen4_pcie_host_init,
.host_deinit = rcar_gen4_pcie_host_deinit,
};
static int rcar_gen4_add_dw_pcie_rp(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie_rp *pp = &rcar->dw.pp;
if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_HOST))
return -ENODEV;
pp->num_vectors = MAX_MSI_IRQS;
pp->ops = &rcar_gen4_pcie_host_ops;
return dw_pcie_host_init(pp);
}
static void rcar_gen4_remove_dw_pcie_rp(struct rcar_gen4_pcie *rcar)
{
dw_pcie_host_deinit(&rcar->dw.pp);
}
/* Endpoint mode */
static void rcar_gen4_pcie_ep_pre_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
int ret;
ret = rcar_gen4_pcie_common_init(rcar);
if (ret)
return;
writel(PCIEDMAINTSTSEN_INIT, rcar->base + PCIEDMAINTSTSEN);
}
static void rcar_gen4_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static void rcar_gen4_pcie_ep_deinit(struct dw_pcie_ep *ep)
{
struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
writel(0, rcar->base + PCIEDMAINTSTSEN);
rcar_gen4_pcie_common_deinit(rcar);
}
static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
enum pci_epc_irq_type type,
u16 interrupt_num)
{
struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
switch (type) {
case PCI_EPC_IRQ_LEGACY:
return dw_pcie_ep_raise_legacy_irq(ep, func_no);
case PCI_EPC_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
default:
dev_err(dw->dev, "Unknown IRQ type\n");
return -EINVAL;
}
return 0;
}
static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,
.reserved_bar = 1 << BAR_1 | 1 << BAR_3 | 1 << BAR_5,
.align = SZ_1M,
};
static const struct pci_epc_features*
rcar_gen4_pcie_ep_get_features(struct dw_pcie_ep *ep)
{
return &rcar_gen4_pcie_epc_features;
}
static unsigned int rcar_gen4_pcie_ep_func_conf_select(struct dw_pcie_ep *ep,
u8 func_no)
{
return func_no * RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET;
}
static unsigned int rcar_gen4_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
u8 func_no)
{
return func_no * RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET;
}
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.pre_init = rcar_gen4_pcie_ep_pre_init,
.ep_init = rcar_gen4_pcie_ep_init,
.deinit = rcar_gen4_pcie_ep_deinit,
.raise_irq = rcar_gen4_pcie_ep_raise_irq,
.get_features = rcar_gen4_pcie_ep_get_features,
.func_conf_select = rcar_gen4_pcie_ep_func_conf_select,
.get_dbi2_offset = rcar_gen4_pcie_ep_get_dbi2_offset,
};
static int rcar_gen4_add_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie_ep *ep = &rcar->dw.ep;
if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_EP))
return -ENODEV;
ep->ops = &pcie_ep_ops;
return dw_pcie_ep_init(ep);
}
static void rcar_gen4_remove_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
{
dw_pcie_ep_exit(&rcar->dw.ep);
}
/* Common */
static int rcar_gen4_add_dw_pcie(struct rcar_gen4_pcie *rcar)
{
rcar->mode = (enum dw_pcie_device_mode)of_device_get_match_data(&rcar->pdev->dev);
switch (rcar->mode) {
case DW_PCIE_RC_TYPE:
return rcar_gen4_add_dw_pcie_rp(rcar);
case DW_PCIE_EP_TYPE:
return rcar_gen4_add_dw_pcie_ep(rcar);
default:
return -EINVAL;
}
}
static int rcar_gen4_pcie_probe(struct platform_device *pdev)
{
struct rcar_gen4_pcie *rcar;
int err;
rcar = rcar_gen4_pcie_alloc(pdev);
if (IS_ERR(rcar))
return PTR_ERR(rcar);
err = rcar_gen4_pcie_get_resources(rcar);
if (err)
return err;
err = rcar_gen4_pcie_prepare(rcar);
if (err)
return err;
err = rcar_gen4_add_dw_pcie(rcar);
if (err)
goto err_unprepare;
return 0;
err_unprepare:
rcar_gen4_pcie_unprepare(rcar);
return err;
}
static void rcar_gen4_remove_dw_pcie(struct rcar_gen4_pcie *rcar)
{
switch (rcar->mode) {
case DW_PCIE_RC_TYPE:
rcar_gen4_remove_dw_pcie_rp(rcar);
break;
case DW_PCIE_EP_TYPE:
rcar_gen4_remove_dw_pcie_ep(rcar);
break;
default:
break;
}
}
static void rcar_gen4_pcie_remove(struct platform_device *pdev)
{
struct rcar_gen4_pcie *rcar = platform_get_drvdata(pdev);
rcar_gen4_remove_dw_pcie(rcar);
rcar_gen4_pcie_unprepare(rcar);
}
static const struct of_device_id rcar_gen4_pcie_of_match[] = {
{
.compatible = "renesas,rcar-gen4-pcie",
.data = (void *)DW_PCIE_RC_TYPE,
},
{
.compatible = "renesas,rcar-gen4-pcie-ep",
.data = (void *)DW_PCIE_EP_TYPE,
},
{},
};
MODULE_DEVICE_TABLE(of, rcar_gen4_pcie_of_match);
static struct platform_driver rcar_gen4_pcie_driver = {
.driver = {
.name = "pcie-rcar-gen4",
.of_match_table = rcar_gen4_pcie_of_match,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rcar_gen4_pcie_probe,
.remove_new = rcar_gen4_pcie_remove,
};
module_platform_driver(rcar_gen4_pcie_driver);
MODULE_DESCRIPTION("Renesas R-Car Gen4 PCIe controller driver");
MODULE_LICENSE("GPL");


@@ -9,6 +9,7 @@
  * Author: Vidya Sagar <vidyas@nvidia.com>
  */
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
@@ -125,7 +126,7 @@
 #define APPL_LTR_MSG_1				0xC4
 #define LTR_MSG_REQ				BIT(15)
-#define LTR_MST_NO_SNOOP_SHIFT			16
+#define LTR_NOSNOOP_MSG_REQ			BIT(31)
 #define APPL_LTR_MSG_2				0xC8
 #define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE	BIT(3)
@@ -321,9 +322,9 @@ static void tegra_pcie_icc_set(struct tegra_pcie_dw *pcie)
 	speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, val);
 	width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
-	val = width * (PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]) / BITS_PER_BYTE);
+	val = width * PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]);
-	if (icc_set_bw(pcie->icc_path, MBps_to_icc(val), 0))
+	if (icc_set_bw(pcie->icc_path, Mbps_to_icc(val), 0))
 		dev_err(pcie->dev, "can't set bw[%u]\n", val);
 	if (speed >= ARRAY_SIZE(pcie_gen_freq))
@@ -346,8 +347,7 @@ static void apply_bad_link_workaround(struct dw_pcie_rp *pp)
 	 */
 	val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
 	if (val & PCI_EXP_LNKSTA_LBMS) {
-		current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
-				     PCI_EXP_LNKSTA_NLW_SHIFT;
+		current_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
 		if (pcie->init_link_width > current_link_width) {
 			dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
 			val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
@@ -496,8 +496,12 @@ static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
 		ktime_t timeout;
 		/* 110us for both snoop and no-snoop */
-		val = 110 | (2 << PCI_LTR_SCALE_SHIFT) | LTR_MSG_REQ;
-		val |= (val << LTR_MST_NO_SNOOP_SHIFT);
+		val = FIELD_PREP(PCI_LTR_VALUE_MASK, 110) |
+		      FIELD_PREP(PCI_LTR_SCALE_MASK, 2) |
+		      LTR_MSG_REQ |
+		      FIELD_PREP(PCI_LTR_NOSNOOP_VALUE, 110) |
+		      FIELD_PREP(PCI_LTR_NOSNOOP_SCALE, 2) |
+		      LTR_NOSNOOP_MSG_REQ;
 		appl_writel(pcie, val, APPL_LTR_MSG_1);
 		/* Send LTR upstream */
@@ -760,8 +764,7 @@ static void tegra_pcie_enable_system_interrupts(struct dw_pcie_rp *pp)
 	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
 				  PCI_EXP_LNKSTA);
-	pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
-				PCI_EXP_LNKSTA_NLW_SHIFT;
+	pcie->init_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val_w);
 	val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
 				  PCI_EXP_LNKCTL);
@@ -917,12 +920,6 @@ static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp)
 			   AMBA_ERROR_RESPONSE_CRS_SHIFT);
 	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
-	/* Configure Max lane width from DT */
-	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
-	val &= ~PCI_EXP_LNKCAP_MLW;
-	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
-	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
 	/* Clear Slot Clock Configuration bit if SRNS configuration */
 	if (pcie->enable_srns) {
 		val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
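Several tegra hunks above swap open-coded shifts for FIELD_GET()/FIELD_PREP() from the newly included <linux/bitfield.h>. A self-contained model of the two helpers for a 32-bit register (the mask is the real PCI_EXP_LNKSTA_NLW; the helper definitions are simplified stand-ins for the kernel's):

#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)	(((~0u) >> (31 - (h))) & ((~0u) << (l)))
/* Simplified stand-ins: shift by the mask's lowest set bit */
#define FIELD_PREP(mask, val)	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((uint32_t)(reg) & (mask)) >> __builtin_ctz(mask))

#define PCI_EXP_LNKSTA_NLW	GENMASK(9, 4)	/* negotiated link width */

int main(void)
{
	uint32_t lnksta = FIELD_PREP(PCI_EXP_LNKSTA_NLW, 8);	/* x8 link */

	printf("NLW field = %u\n", FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta));
	return 0;
}

The mask carries the shift implicitly, which is what lets the diff drop the PCI_EXP_LNKSTA_NLW_SHIFT constant entirely.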


@@ -539,7 +539,7 @@ static bool mobiveil_pcie_is_bridge(struct mobiveil_pcie *pcie)
 	u32 header_type;
 	header_type = mobiveil_csr_readb(pcie, PCI_HEADER_TYPE);
-	header_type &= 0x7f;
+	header_type &= PCI_HEADER_TYPE_MASK;
 	return header_type == PCI_HEADER_TYPE_BRIDGE;
 }
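This is one of several hunks in this series (mobiveil, iproc, rcar, vmd) that replace the literal 0x7f with PCI_HEADER_TYPE_MASK. The header-type byte splits into a 7-bit layout field and a multi-function bit; a tiny sketch of that decoding (the defines are the real <uapi/linux/pci_regs.h> values, the sample byte is made up):

#include <stdio.h>

#define PCI_HEADER_TYPE_MASK	0x7f	/* header layout field */
#define PCI_HEADER_TYPE_MFD	0x80	/* multi-function device bit */
#define PCI_HEADER_TYPE_BRIDGE	1

int main(void)
{
	unsigned char hdr_type = 0x81;	/* made-up: multi-function bridge */

	printf("layout %u, %u function(s)\n",
	       hdr_type & PCI_HEADER_TYPE_MASK,
	       (hdr_type & PCI_HEADER_TYPE_MFD) ? 8 : 1);
	return 0;
}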


@@ -545,7 +545,7 @@ struct hv_pcidev_description {
 struct hv_dr_state {
 	struct list_head list_entry;
 	u32 device_count;
-	struct hv_pcidev_description func[];
+	struct hv_pcidev_description func[] __counted_by(device_count);
 };
 struct hv_pci_dev {
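The __counted_by(device_count) annotation tells the compiler's array-bounds machinery how many elements the flexible array holds. A minimal userspace sketch, assuming a compiler with the counted_by attribute (the fallback makes it a no-op elsewhere; struct members are trimmed for brevity):

#include <stdlib.h>

#ifndef __has_attribute
#define __has_attribute(x) 0
#endif

#if __has_attribute(__counted_by__)
#define __counted_by(m)	__attribute__((__counted_by__(m)))
#else
#define __counted_by(m)	/* no-op on compilers without counted_by */
#endif

struct desc {			/* stands in for hv_pcidev_description */
	unsigned int dummy;
};

struct dr_state {
	unsigned int device_count;
	struct desc func[] __counted_by(device_count);
};

int main(void)
{
	unsigned int n = 4;
	struct dr_state *dr = malloc(sizeof(*dr) + n * sizeof(dr->func[0]));

	if (!dr)
		return 1;
	/* Bounds checkers now know func[] has device_count entries, so
	 * an access like dr->func[n] can be flagged at run time. */
	dr->device_count = n;
	dr->func[n - 1].dummy = 0;	/* in bounds */
	free(dr);
	return 0;
}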


@@ -264,7 +264,7 @@ static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port)
 	 */
 	lnkcap = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
 	lnkcap &= ~PCI_EXP_LNKCAP_MLW;
-	lnkcap |= (port->is_x4 ? 4 : 1) << 4;
+	lnkcap |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, port->is_x4 ? 4 : 1);
 	mvebu_writel(port, lnkcap, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
 	/* Disable Root Bridge I/O space, memory space and bus mastering. */


@@ -163,10 +163,11 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
 				    int where, int size, u32 *val)
 {
 	struct xgene_pcie *port = pcie_bus_to_port(bus);
+	int ret;
-	if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) !=
-	    PCIBIOS_SUCCESSFUL)
-		return PCIBIOS_DEVICE_NOT_FOUND;
+	ret = pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val);
+	if (ret != PCIBIOS_SUCCESSFUL)
+		return ret;
 	/*
 	 * The v1 controller has a bug in its Configuration Request Retry
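The xgene fix keeps the existing trick of reading an aligned 32-bit dword for any sub-dword request (the `where & ~0x3` above) while now propagating the real PCIBIOS error. A sketch of how a sub-dword value is carved out of such an aligned read (the register contents are dummy values):

#include <stdint.h>
#include <stdio.h>

static uint32_t read32_aligned(uint32_t off)	/* dummy hardware access */
{
	return 0xdeadbeef;
}

static uint32_t config_read(uint32_t where, int size)
{
	uint32_t val = read32_aligned(where & ~0x3u);	/* dword-aligned */

	val >>= (where & 0x3) * 8;	/* shift the requested bytes down */
	if (size < 4)
		val &= (1u << (size * 8)) - 1;
	return val;
}

int main(void)
{
	printf("0x%x\n", config_read(0x3e, 2));	/* 16-bit read at 0x3e */
	return 0;
}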


@@ -783,7 +783,7 @@ static int iproc_pcie_check_link(struct iproc_pcie *pcie)
 	/* make sure we are not in EP mode */
 	iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type);
-	if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) {
+	if ((hdr_type & PCI_HEADER_TYPE_MASK) != PCI_HEADER_TYPE_BRIDGE) {
 		dev_err(dev, "in EP mode, hdr=%#02x\n", hdr_type);
 		return -EFAULT;
 	}


@@ -43,7 +43,7 @@ static void rcar_pcie_ep_hw_init(struct rcar_pcie *pcie)
 	rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
 	rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
 		   PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ENDPOINT << 4);
-	rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
+	rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), PCI_HEADER_TYPE_MASK,
 		   PCI_HEADER_TYPE_NORMAL);
 	/* Write out the physical slot number = 0 */


@@ -460,7 +460,7 @@ static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
 	rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
 	rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
 		   PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4);
-	rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
+	rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), PCI_HEADER_TYPE_MASK,
 		   PCI_HEADER_TYPE_BRIDGE);
 	/* Enable data link layer active state reporting */


@@ -0,0 +1,31 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* (C) Copyright 2023, Xilinx, Inc.
*/
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
/* Interrupt registers definitions */
#define XILINX_PCIE_INTR_LINK_DOWN 0
#define XILINX_PCIE_INTR_HOT_RESET 3
#define XILINX_PCIE_INTR_CFG_PCIE_TIMEOUT 4
#define XILINX_PCIE_INTR_CFG_TIMEOUT 8
#define XILINX_PCIE_INTR_CORRECTABLE 9
#define XILINX_PCIE_INTR_NONFATAL 10
#define XILINX_PCIE_INTR_FATAL 11
#define XILINX_PCIE_INTR_CFG_ERR_POISON 12
#define XILINX_PCIE_INTR_PME_TO_ACK_RCVD 15
#define XILINX_PCIE_INTR_INTX 16
#define XILINX_PCIE_INTR_PM_PME_RCVD 17
#define XILINX_PCIE_INTR_MSI 17
#define XILINX_PCIE_INTR_SLV_UNSUPP 20
#define XILINX_PCIE_INTR_SLV_UNEXP 21
#define XILINX_PCIE_INTR_SLV_COMPL 22
#define XILINX_PCIE_INTR_SLV_ERRP 23
#define XILINX_PCIE_INTR_SLV_CMPABT 24
#define XILINX_PCIE_INTR_SLV_ILLBUR 25
#define XILINX_PCIE_INTR_MST_DECERR 26
#define XILINX_PCIE_INTR_MST_SLVERR 27
#define XILINX_PCIE_INTR_SLV_PCIE_TIMEOUT 28


@@ -16,11 +16,9 @@
 #include <linux/of_address.h>
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
-#include <linux/pci.h>
-#include <linux/platform_device.h>
-#include <linux/pci-ecam.h>
 #include "../pci.h"
+#include "pcie-xilinx-common.h"
 /* Register definitions */
 #define XILINX_CPM_PCIE_REG_IDR		0x00000E10
@@ -38,29 +36,7 @@
 #define XILINX_CPM_PCIE_IR_ENABLE	0x000002A8
 #define XILINX_CPM_PCIE_IR_LOCAL	BIT(0)
-/* Interrupt registers definitions */
-#define XILINX_CPM_PCIE_INTR_LINK_DOWN		0
-#define XILINX_CPM_PCIE_INTR_HOT_RESET		3
-#define XILINX_CPM_PCIE_INTR_CFG_PCIE_TIMEOUT	4
-#define XILINX_CPM_PCIE_INTR_CFG_TIMEOUT	8
-#define XILINX_CPM_PCIE_INTR_CORRECTABLE	9
-#define XILINX_CPM_PCIE_INTR_NONFATAL		10
-#define XILINX_CPM_PCIE_INTR_FATAL		11
-#define XILINX_CPM_PCIE_INTR_CFG_ERR_POISON	12
-#define XILINX_CPM_PCIE_INTR_PME_TO_ACK_RCVD	15
-#define XILINX_CPM_PCIE_INTR_INTX		16
-#define XILINX_CPM_PCIE_INTR_PM_PME_RCVD	17
-#define XILINX_CPM_PCIE_INTR_SLV_UNSUPP		20
-#define XILINX_CPM_PCIE_INTR_SLV_UNEXP		21
-#define XILINX_CPM_PCIE_INTR_SLV_COMPL		22
-#define XILINX_CPM_PCIE_INTR_SLV_ERRP		23
-#define XILINX_CPM_PCIE_INTR_SLV_CMPABT		24
-#define XILINX_CPM_PCIE_INTR_SLV_ILLBUR		25
-#define XILINX_CPM_PCIE_INTR_MST_DECERR		26
-#define XILINX_CPM_PCIE_INTR_MST_SLVERR		27
-#define XILINX_CPM_PCIE_INTR_SLV_PCIE_TIMEOUT	28
-#define IMR(x) BIT(XILINX_CPM_PCIE_INTR_ ##x)
+#define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)
 #define XILINX_CPM_PCIE_IMR_ALL_MASK	\
 	( \
@@ -323,7 +299,7 @@ static void xilinx_cpm_pcie_event_flow(struct irq_desc *desc)
 }
 #define _IC(x, s) \
-	[XILINX_CPM_PCIE_INTR_ ## x] = { __stringify(x), s }
+	[XILINX_PCIE_INTR_ ## x] = { __stringify(x), s }
 static const struct {
 	const char *sym;
@@ -359,9 +335,9 @@ static irqreturn_t xilinx_cpm_pcie_intr_handler(int irq, void *dev_id)
 	d = irq_domain_get_irq_data(port->cpm_domain, irq);
 	switch (d->hwirq) {
-	case XILINX_CPM_PCIE_INTR_CORRECTABLE:
-	case XILINX_CPM_PCIE_INTR_NONFATAL:
-	case XILINX_CPM_PCIE_INTR_FATAL:
+	case XILINX_PCIE_INTR_CORRECTABLE:
+	case XILINX_PCIE_INTR_NONFATAL:
+	case XILINX_PCIE_INTR_FATAL:
 		cpm_pcie_clear_err_interrupts(port);
 		fallthrough;
@@ -466,7 +442,7 @@ static int xilinx_cpm_setup_irq(struct xilinx_cpm_pcie *port)
 	}
 	port->intx_irq = irq_create_mapping(port->cpm_domain,
-					    XILINX_CPM_PCIE_INTR_INTX);
+					    XILINX_PCIE_INTR_INTX);
 	if (!port->intx_irq) {
 		dev_err(dev, "Failed to map INTx interrupt\n");
 		return -ENXIO;
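The shared IMR() helper above leans on preprocessor token pasting: `XILINX_PCIE_INTR_ ##x` glues the short name onto the common prefix before BIT() is applied, so both the CPM and XDMA drivers can build their masks from the one header. A sketch with two of the real interrupt numbers:

#include <stdio.h>

#define BIT(n)	(1u << (n))

/* Real values from pcie-xilinx-common.h */
#define XILINX_PCIE_INTR_LINK_DOWN	0
#define XILINX_PCIE_INTR_INTX		16

/* IMR(LINK_DOWN) pastes into BIT(XILINX_PCIE_INTR_LINK_DOWN) */
#define IMR(x)	BIT(XILINX_PCIE_INTR_ ##x)

int main(void)
{
	printf("mask = 0x%08x\n", IMR(LINK_DOWN) | IMR(INTX));
	return 0;
}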


@@ -0,0 +1,814 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* PCIe host controller driver for Xilinx XDMA PCIe Bridge
*
* Copyright (C) 2023 Xilinx, Inc. All rights reserved.
*/
#include <linux/bitfield.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include "../pci.h"
#include "pcie-xilinx-common.h"
/* Register definitions */
#define XILINX_PCIE_DMA_REG_IDR 0x00000138
#define XILINX_PCIE_DMA_REG_IMR 0x0000013c
#define XILINX_PCIE_DMA_REG_PSCR 0x00000144
#define XILINX_PCIE_DMA_REG_RPSC 0x00000148
#define XILINX_PCIE_DMA_REG_MSIBASE1 0x0000014c
#define XILINX_PCIE_DMA_REG_MSIBASE2 0x00000150
#define XILINX_PCIE_DMA_REG_RPEFR 0x00000154
#define XILINX_PCIE_DMA_REG_IDRN 0x00000160
#define XILINX_PCIE_DMA_REG_IDRN_MASK 0x00000164
#define XILINX_PCIE_DMA_REG_MSI_LOW 0x00000170
#define XILINX_PCIE_DMA_REG_MSI_HI 0x00000174
#define XILINX_PCIE_DMA_REG_MSI_LOW_MASK 0x00000178
#define XILINX_PCIE_DMA_REG_MSI_HI_MASK 0x0000017c
#define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)
#define XILINX_PCIE_INTR_IMR_ALL_MASK \
( \
IMR(LINK_DOWN) | \
IMR(HOT_RESET) | \
IMR(CFG_TIMEOUT) | \
IMR(CORRECTABLE) | \
IMR(NONFATAL) | \
IMR(FATAL) | \
IMR(INTX) | \
IMR(MSI) | \
IMR(SLV_UNSUPP) | \
IMR(SLV_UNEXP) | \
IMR(SLV_COMPL) | \
IMR(SLV_ERRP) | \
IMR(SLV_CMPABT) | \
IMR(SLV_ILLBUR) | \
IMR(MST_DECERR) | \
IMR(MST_SLVERR) \
)
#define XILINX_PCIE_DMA_IMR_ALL_MASK 0x0ff30fe9
#define XILINX_PCIE_DMA_IDR_ALL_MASK 0xffffffff
#define XILINX_PCIE_DMA_IDRN_MASK GENMASK(19, 16)
/* Root Port Error Register definitions */
#define XILINX_PCIE_DMA_RPEFR_ERR_VALID BIT(18)
#define XILINX_PCIE_DMA_RPEFR_REQ_ID GENMASK(15, 0)
#define XILINX_PCIE_DMA_RPEFR_ALL_MASK 0xffffffff
/* Root Port Interrupt Register definitions */
#define XILINX_PCIE_DMA_IDRN_SHIFT 16
/* Root Port Status/control Register definitions */
#define XILINX_PCIE_DMA_REG_RPSC_BEN BIT(0)
/* Phy Status/Control Register definitions */
#define XILINX_PCIE_DMA_REG_PSCR_LNKUP BIT(11)
/* Number of MSI IRQs */
#define XILINX_NUM_MSI_IRQS 64
struct xilinx_msi {
struct irq_domain *msi_domain;
unsigned long *bitmap;
struct irq_domain *dev_domain;
struct mutex lock; /* Protect bitmap variable */
int irq_msi0;
int irq_msi1;
};
/**
* struct pl_dma_pcie - PCIe port information
* @dev: Device pointer
* @reg_base: IO Mapped Register Base
* @irq: Interrupt number
* @cfg: Holds mappings of config space window
* @phys_reg_base: Physical address of reg base
* @intx_domain: Legacy IRQ domain pointer
* @pldma_domain: PL DMA IRQ domain pointer
* @resources: Bus Resources
* @msi: MSI information
* @intx_irq: INTx error interrupt number
* @lock: Lock protecting shared register access
*/
struct pl_dma_pcie {
struct device *dev;
void __iomem *reg_base;
int irq;
struct pci_config_window *cfg;
phys_addr_t phys_reg_base;
struct irq_domain *intx_domain;
struct irq_domain *pldma_domain;
struct list_head resources;
struct xilinx_msi msi;
int intx_irq;
raw_spinlock_t lock;
};
static inline u32 pcie_read(struct pl_dma_pcie *port, u32 reg)
{
return readl(port->reg_base + reg);
}
static inline void pcie_write(struct pl_dma_pcie *port, u32 val, u32 reg)
{
writel(val, port->reg_base + reg);
}
static inline bool xilinx_pl_dma_pcie_link_up(struct pl_dma_pcie *port)
{
return (pcie_read(port, XILINX_PCIE_DMA_REG_PSCR) &
XILINX_PCIE_DMA_REG_PSCR_LNKUP) ? true : false;
}
static void xilinx_pl_dma_pcie_clear_err_interrupts(struct pl_dma_pcie *port)
{
unsigned long val = pcie_read(port, XILINX_PCIE_DMA_REG_RPEFR);
if (val & XILINX_PCIE_DMA_RPEFR_ERR_VALID) {
dev_dbg(port->dev, "Requester ID %lu\n",
val & XILINX_PCIE_DMA_RPEFR_REQ_ID);
pcie_write(port, XILINX_PCIE_DMA_RPEFR_ALL_MASK,
XILINX_PCIE_DMA_REG_RPEFR);
}
}
static bool xilinx_pl_dma_pcie_valid_device(struct pci_bus *bus,
unsigned int devfn)
{
struct pl_dma_pcie *port = bus->sysdata;
if (!pci_is_root_bus(bus)) {
/*
* Checking whether the link is up is the last line of
* defense, and this check is inherently racy by definition.
* Sending a PIO request to a downstream device when the link is
* down causes an unrecoverable error, and a reset of the entire
* PCIe controller will be needed. We can reduce the likelihood
* of that unrecoverable error by checking whether the link is
* up, but we can't completely prevent it because the link may
* go down between the link-up check and the PIO request.
*/
if (!xilinx_pl_dma_pcie_link_up(port))
return false;
} else if (devfn > 0)
/* Only one device down on each root port */
return false;
return true;
}
static void __iomem *xilinx_pl_dma_pcie_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct pl_dma_pcie *port = bus->sysdata;
if (!xilinx_pl_dma_pcie_valid_device(bus, devfn))
return NULL;
return port->reg_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
}
/* PCIe operations */
static struct pci_ecam_ops xilinx_pl_dma_pcie_ops = {
.pci_ops = {
.map_bus = xilinx_pl_dma_pcie_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
}
};
static void xilinx_pl_dma_pcie_enable_msi(struct pl_dma_pcie *port)
{
phys_addr_t msi_addr = port->phys_reg_base;
pcie_write(port, upper_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE1);
pcie_write(port, lower_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE2);
}
static void xilinx_mask_intx_irq(struct irq_data *data)
{
struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data);
unsigned long flags;
u32 mask, val;
mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT);
raw_spin_lock_irqsave(&port->lock, flags);
val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK);
pcie_write(port, (val & (~mask)), XILINX_PCIE_DMA_REG_IDRN_MASK);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static void xilinx_unmask_intx_irq(struct irq_data *data)
{
struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data);
unsigned long flags;
u32 mask, val;
mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT);
raw_spin_lock_irqsave(&port->lock, flags);
val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK);
pcie_write(port, (val | mask), XILINX_PCIE_DMA_REG_IDRN_MASK);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static struct irq_chip xilinx_leg_irq_chip = {
.name = "pl_dma:INTx",
.irq_mask = xilinx_mask_intx_irq,
.irq_unmask = xilinx_unmask_intx_irq,
};
static int xilinx_pl_dma_pcie_intx_map(struct irq_domain *domain,
unsigned int irq, irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &xilinx_leg_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
irq_set_status_flags(irq, IRQ_LEVEL);
return 0;
}
/* INTx IRQ Domain operations */
static const struct irq_domain_ops intx_domain_ops = {
.map = xilinx_pl_dma_pcie_intx_map,
};
static irqreturn_t xilinx_pl_dma_pcie_msi_handler_high(int irq, void *args)
{
struct xilinx_msi *msi;
unsigned long status;
u32 bit, virq;
struct pl_dma_pcie *port = args;
msi = &port->msi;
while ((status = pcie_read(port, XILINX_PCIE_DMA_REG_MSI_HI)) != 0) {
for_each_set_bit(bit, &status, 32) {
pcie_write(port, 1 << bit, XILINX_PCIE_DMA_REG_MSI_HI);
bit = bit + 32;
virq = irq_find_mapping(msi->dev_domain, bit);
if (virq)
generic_handle_irq(virq);
}
}
return IRQ_HANDLED;
}
static irqreturn_t xilinx_pl_dma_pcie_msi_handler_low(int irq, void *args)
{
struct pl_dma_pcie *port = args;
struct xilinx_msi *msi;
unsigned long status;
u32 bit, virq;
msi = &port->msi;
while ((status = pcie_read(port, XILINX_PCIE_DMA_REG_MSI_LOW)) != 0) {
for_each_set_bit(bit, &status, 32) {
pcie_write(port, 1 << bit, XILINX_PCIE_DMA_REG_MSI_LOW);
virq = irq_find_mapping(msi->dev_domain, bit);
if (virq)
generic_handle_irq(virq);
}
}
return IRQ_HANDLED;
}
static irqreturn_t xilinx_pl_dma_pcie_event_flow(int irq, void *args)
{
struct pl_dma_pcie *port = args;
unsigned long val;
int i;
val = pcie_read(port, XILINX_PCIE_DMA_REG_IDR);
val &= pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
for_each_set_bit(i, &val, 32)
generic_handle_domain_irq(port->pldma_domain, i);
pcie_write(port, val, XILINX_PCIE_DMA_REG_IDR);
return IRQ_HANDLED;
}
#define _IC(x, s) \
[XILINX_PCIE_INTR_ ## x] = { __stringify(x), s }
static const struct {
const char *sym;
const char *str;
} intr_cause[32] = {
_IC(LINK_DOWN, "Link Down"),
_IC(HOT_RESET, "Hot reset"),
_IC(CFG_TIMEOUT, "ECAM access timeout"),
_IC(CORRECTABLE, "Correctable error message"),
_IC(NONFATAL, "Non fatal error message"),
_IC(FATAL, "Fatal error message"),
_IC(SLV_UNSUPP, "Slave unsupported request"),
_IC(SLV_UNEXP, "Slave unexpected completion"),
_IC(SLV_COMPL, "Slave completion timeout"),
_IC(SLV_ERRP, "Slave Error Poison"),
_IC(SLV_CMPABT, "Slave Completer Abort"),
_IC(SLV_ILLBUR, "Slave Illegal Burst"),
_IC(MST_DECERR, "Master decode error"),
_IC(MST_SLVERR, "Master slave error"),
};
static irqreturn_t xilinx_pl_dma_pcie_intr_handler(int irq, void *dev_id)
{
struct pl_dma_pcie *port = (struct pl_dma_pcie *)dev_id;
struct device *dev = port->dev;
struct irq_data *d;
d = irq_domain_get_irq_data(port->pldma_domain, irq);
switch (d->hwirq) {
case XILINX_PCIE_INTR_CORRECTABLE:
case XILINX_PCIE_INTR_NONFATAL:
case XILINX_PCIE_INTR_FATAL:
xilinx_pl_dma_pcie_clear_err_interrupts(port);
fallthrough;
default:
if (intr_cause[d->hwirq].str)
dev_warn(dev, "%s\n", intr_cause[d->hwirq].str);
else
dev_warn(dev, "Unknown IRQ %ld\n", d->hwirq);
}
return IRQ_HANDLED;
}
static struct irq_chip xilinx_msi_irq_chip = {
.name = "pl_dma:PCIe MSI",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
static struct msi_domain_info xilinx_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_MULTI_PCI_MSI),
.chip = &xilinx_msi_irq_chip,
};
static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
phys_addr_t msi_addr = pcie->phys_reg_base;
msg->address_lo = lower_32_bits(msi_addr);
msg->address_hi = upper_32_bits(msi_addr);
msg->data = data->hwirq;
}
static int xilinx_msi_set_affinity(struct irq_data *irq_data,
const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static struct irq_chip xilinx_irq_chip = {
.name = "pl_dma:MSI",
.irq_compose_msi_msg = xilinx_compose_msi_msg,
.irq_set_affinity = xilinx_msi_set_affinity,
};
static int xilinx_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct pl_dma_pcie *pcie = domain->host_data;
struct xilinx_msi *msi = &pcie->msi;
int bit, i;
mutex_lock(&msi->lock);
bit = bitmap_find_free_region(msi->bitmap, XILINX_NUM_MSI_IRQS,
get_count_order(nr_irqs));
if (bit < 0) {
mutex_unlock(&msi->lock);
return -ENOSPC;
}
for (i = 0; i < nr_irqs; i++) {
irq_domain_set_info(domain, virq + i, bit + i, &xilinx_irq_chip,
domain->host_data, handle_simple_irq,
NULL, NULL);
}
mutex_unlock(&msi->lock);
return 0;
}
static void xilinx_irq_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *data = irq_domain_get_irq_data(domain, virq);
struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
struct xilinx_msi *msi = &pcie->msi;
mutex_lock(&msi->lock);
bitmap_release_region(msi->bitmap, data->hwirq,
get_count_order(nr_irqs));
mutex_unlock(&msi->lock);
}
static const struct irq_domain_ops dev_msi_domain_ops = {
.alloc = xilinx_irq_domain_alloc,
.free = xilinx_irq_domain_free,
};
static void xilinx_pl_dma_pcie_free_irq_domains(struct pl_dma_pcie *port)
{
struct xilinx_msi *msi = &port->msi;
if (port->intx_domain) {
irq_domain_remove(port->intx_domain);
port->intx_domain = NULL;
}
if (msi->dev_domain) {
irq_domain_remove(msi->dev_domain);
msi->dev_domain = NULL;
}
if (msi->msi_domain) {
irq_domain_remove(msi->msi_domain);
msi->msi_domain = NULL;
}
}
static int xilinx_pl_dma_pcie_init_msi_irq_domain(struct pl_dma_pcie *port)
{
struct device *dev = port->dev;
struct xilinx_msi *msi = &port->msi;
int size = BITS_TO_LONGS(XILINX_NUM_MSI_IRQS) * sizeof(long);
struct fwnode_handle *fwnode = of_node_to_fwnode(port->dev->of_node);
msi->dev_domain = irq_domain_add_linear(NULL, XILINX_NUM_MSI_IRQS,
&dev_msi_domain_ops, port);
if (!msi->dev_domain)
goto out;
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&xilinx_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain)
goto out;
mutex_init(&msi->lock);
msi->bitmap = kzalloc(size, GFP_KERNEL);
if (!msi->bitmap)
goto out;
raw_spin_lock_init(&port->lock);
xilinx_pl_dma_pcie_enable_msi(port);
return 0;
out:
xilinx_pl_dma_pcie_free_irq_domains(port);
dev_err(dev, "Failed to allocate MSI IRQ domains\n");
return -ENOMEM;
}
/*
 * The INTx error interrupt is a Xilinx controller-specific interrupt used
 * to notify the user of errors such as configuration timeouts, slave
 * unsupported requests, and fatal and non-fatal errors.
*/
static irqreturn_t xilinx_pl_dma_pcie_intx_flow(int irq, void *args)
{
unsigned long val;
int i;
struct pl_dma_pcie *port = args;
val = FIELD_GET(XILINX_PCIE_DMA_IDRN_MASK,
pcie_read(port, XILINX_PCIE_DMA_REG_IDRN));
for_each_set_bit(i, &val, PCI_NUM_INTX)
generic_handle_domain_irq(port->intx_domain, i);
return IRQ_HANDLED;
}
static void xilinx_pl_dma_pcie_mask_event_irq(struct irq_data *d)
{
struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d);
u32 val;
raw_spin_lock(&port->lock);
val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
val &= ~BIT(d->hwirq);
pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR);
raw_spin_unlock(&port->lock);
}
static void xilinx_pl_dma_pcie_unmask_event_irq(struct irq_data *d)
{
struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d);
u32 val;
raw_spin_lock(&port->lock);
val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
val |= BIT(d->hwirq);
pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR);
raw_spin_unlock(&port->lock);
}
static struct irq_chip xilinx_pl_dma_pcie_event_irq_chip = {
.name = "pl_dma:RC-Event",
.irq_mask = xilinx_pl_dma_pcie_mask_event_irq,
.irq_unmask = xilinx_pl_dma_pcie_unmask_event_irq,
};
static int xilinx_pl_dma_pcie_event_map(struct irq_domain *domain,
unsigned int irq, irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &xilinx_pl_dma_pcie_event_irq_chip,
handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
irq_set_status_flags(irq, IRQ_LEVEL);
return 0;
}
static const struct irq_domain_ops event_domain_ops = {
.map = xilinx_pl_dma_pcie_event_map,
};
/**
* xilinx_pl_dma_pcie_init_irq_domain - Initialize IRQ domain
* @port: PCIe port information
*
* Return: '0' on success and error value on failure.
*/
static int xilinx_pl_dma_pcie_init_irq_domain(struct pl_dma_pcie *port)
{
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node;
int ret;
/* Setup INTx */
pcie_intc_node = of_get_child_by_name(node, "interrupt-controller");
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
return -EINVAL;
}
port->pldma_domain = irq_domain_add_linear(pcie_intc_node, 32,
&event_domain_ops, port);
if (!port->pldma_domain)
return -ENOMEM;
irq_domain_update_bus_token(port->pldma_domain, DOMAIN_BUS_NEXUS);
port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
&intx_domain_ops, port);
if (!port->intx_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return -ENOMEM;	/* irq_domain_add_linear() returns NULL, not ERR_PTR */
}
irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
ret = xilinx_pl_dma_pcie_init_msi_irq_domain(port);
if (ret != 0) {
irq_domain_remove(port->intx_domain);
return -ENOMEM;
}
of_node_put(pcie_intc_node);
raw_spin_lock_init(&port->lock);
return 0;
}
static int xilinx_pl_dma_pcie_setup_irq(struct pl_dma_pcie *port)
{
struct device *dev = port->dev;
struct platform_device *pdev = to_platform_device(dev);
int i, irq, err;
port->irq = platform_get_irq(pdev, 0);
if (port->irq < 0)
return port->irq;
for (i = 0; i < ARRAY_SIZE(intr_cause); i++) {
if (!intr_cause[i].str)
continue;
irq = irq_create_mapping(port->pldma_domain, i);
if (!irq) {
dev_err(dev, "Failed to map interrupt\n");
return -ENXIO;
}
err = devm_request_irq(dev, irq,
xilinx_pl_dma_pcie_intr_handler,
IRQF_SHARED | IRQF_NO_THREAD,
intr_cause[i].sym, port);
if (err) {
dev_err(dev, "Failed to request IRQ %d\n", irq);
return err;
}
}
port->intx_irq = irq_create_mapping(port->pldma_domain,
XILINX_PCIE_INTR_INTX);
if (!port->intx_irq) {
dev_err(dev, "Failed to map INTx interrupt\n");
return -ENXIO;
}
err = devm_request_irq(dev, port->intx_irq, xilinx_pl_dma_pcie_intx_flow,
IRQF_SHARED | IRQF_NO_THREAD, NULL, port);
if (err) {
dev_err(dev, "Failed to request INTx IRQ %d\n", irq);
return err;
}
err = devm_request_irq(dev, port->irq, xilinx_pl_dma_pcie_event_flow,
IRQF_SHARED | IRQF_NO_THREAD, NULL, port);
if (err) {
dev_err(dev, "Failed to request event IRQ %d\n", irq);
return err;
}
return 0;
}
static void xilinx_pl_dma_pcie_init_port(struct pl_dma_pcie *port)
{
if (xilinx_pl_dma_pcie_link_up(port))
dev_info(port->dev, "PCIe Link is UP\n");
else
dev_info(port->dev, "PCIe Link is DOWN\n");
/* Disable all interrupts */
pcie_write(port, ~XILINX_PCIE_DMA_IDR_ALL_MASK,
XILINX_PCIE_DMA_REG_IMR);
/* Clear pending interrupts */
pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_IDR) &
XILINX_PCIE_DMA_IMR_ALL_MASK,
XILINX_PCIE_DMA_REG_IDR);
/* Needed for MSI DECODE MODE */
pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK,
XILINX_PCIE_DMA_REG_MSI_LOW_MASK);
pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK,
XILINX_PCIE_DMA_REG_MSI_HI_MASK);
/* Set the Bridge enable bit */
pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_RPSC) |
XILINX_PCIE_DMA_REG_RPSC_BEN,
XILINX_PCIE_DMA_REG_RPSC);
}
static int xilinx_request_msi_irq(struct pl_dma_pcie *port)
{
struct device *dev = port->dev;
struct platform_device *pdev = to_platform_device(dev);
int ret;
port->msi.irq_msi0 = platform_get_irq_byname(pdev, "msi0");
if (port->msi.irq_msi0 <= 0) {
dev_err(dev, "Unable to find msi0 IRQ line\n");
return port->msi.irq_msi0;
}
ret = devm_request_irq(dev, port->msi.irq_msi0, xilinx_pl_dma_pcie_msi_handler_low,
IRQF_SHARED | IRQF_NO_THREAD, "xlnx-pcie-dma-pl",
port);
if (ret) {
dev_err(dev, "Failed to register interrupt\n");
return ret;
}
port->msi.irq_msi1 = platform_get_irq_byname(pdev, "msi1");
if (port->msi.irq_msi1 <= 0) {
dev_err(dev, "Unable to find msi1 IRQ line\n");
return port->msi.irq_msi1;
}
ret = devm_request_irq(dev, port->msi.irq_msi1, xilinx_pl_dma_pcie_msi_handler_high,
IRQF_SHARED | IRQF_NO_THREAD, "xlnx-pcie-dma-pl",
port);
if (ret) {
dev_err(dev, "Failed to register interrupt\n");
return ret;
}
return 0;
}
static int xilinx_pl_dma_pcie_parse_dt(struct pl_dma_pcie *port,
struct resource *bus_range)
{
struct device *dev = port->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res;
int err;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "Missing \"reg\" property\n");
return -ENXIO;
}
port->phys_reg_base = res->start;
port->cfg = pci_ecam_create(dev, res, bus_range, &xilinx_pl_dma_pcie_ops);
if (IS_ERR(port->cfg))
return PTR_ERR(port->cfg);
port->reg_base = port->cfg->win;
err = xilinx_request_msi_irq(port);
if (err) {
pci_ecam_free(port->cfg);
return err;
}
return 0;
}
static int xilinx_pl_dma_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pl_dma_pcie *port;
struct pci_host_bridge *bridge;
struct resource_entry *bus;
int err;
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
if (!bridge)
return -ENODEV;
port = pci_host_bridge_priv(bridge);
port->dev = dev;
bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
if (!bus)
return -ENODEV;
err = xilinx_pl_dma_pcie_parse_dt(port, bus->res);
if (err) {
dev_err(dev, "Parsing DT failed\n");
return err;
}
xilinx_pl_dma_pcie_init_port(port);
err = xilinx_pl_dma_pcie_init_irq_domain(port);
if (err)
goto err_irq_domain;
err = xilinx_pl_dma_pcie_setup_irq(port);
if (err)
	goto err_host_bridge;
bridge->sysdata = port;
bridge->ops = &xilinx_pl_dma_pcie_ops.pci_ops;
err = pci_host_probe(bridge);
if (err < 0)
goto err_host_bridge;
return 0;
err_host_bridge:
xilinx_pl_dma_pcie_free_irq_domains(port);
err_irq_domain:
pci_ecam_free(port->cfg);
return err;
}
static const struct of_device_id xilinx_pl_dma_pcie_of_match[] = {
{
.compatible = "xlnx,xdma-host-3.00",
},
{}
};
static struct platform_driver xilinx_pl_dma_pcie_driver = {
.driver = {
.name = "xilinx-xdma-pcie",
.of_match_table = xilinx_pl_dma_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = xilinx_pl_dma_pcie_probe,
};
builtin_platform_driver(xilinx_pl_dma_pcie_driver);
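xilinx_pl_dma_pcie_map_bus() computes config-space addresses with PCIE_ECAM_OFFSET(). The ECAM layout packs bus, devfn, and register offset into fixed bit positions; a standalone sketch of that math (shift values follow the ECAM layout used by <linux/pci-ecam.h>):

#include <stdint.h>
#include <stdio.h>

static uint32_t ecam_offset(uint8_t bus, uint8_t devfn, uint16_t where)
{
	/* ECAM: bus in bits 27:20, devfn in 19:12, register in 11:0 */
	return ((uint32_t)bus << 20) | ((uint32_t)devfn << 12) |
	       (where & 0xfff);
}

int main(void)
{
	/* Bus 1, device 0, function 0, config offset 0 -> 0x00100000 */
	printf("0x%08x\n", ecam_offset(1, 0, 0));
	return 0;
}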


@@ -126,7 +126,7 @@
 #define E_ECAM_CR_ENABLE	BIT(0)
 #define E_ECAM_SIZE_LOC		GENMASK(20, 16)
 #define E_ECAM_SIZE_SHIFT	16
-#define NWL_ECAM_VALUE_DEFAULT	12
+#define NWL_ECAM_MAX_SIZE	16
 #define CFG_DMA_REG_BAR		GENMASK(2, 0)
 #define CFG_PCIE_CACHE		GENMASK(7, 0)
@@ -165,8 +165,6 @@ struct nwl_pcie {
 	u32 ecam_size;
 	int irq_intx;
 	int irq_misc;
-	u32 ecam_value;
-	u8 last_busno;
 	struct nwl_msi msi;
 	struct irq_domain *legacy_irq_domain;
 	struct clk *clk;
@@ -625,7 +623,7 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
 	struct platform_device *pdev = to_platform_device(dev);
-	u32 breg_val, ecam_val, first_busno = 0;
+	u32 breg_val, ecam_val;
 	int err;
 	breg_val = nwl_bridge_readl(pcie, E_BREG_CAPABILITIES) & BREG_PRESENT;
@@ -675,7 +673,7 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
 			  E_ECAM_CR_ENABLE, E_ECAM_CONTROL);
 	nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) |
-			  (pcie->ecam_value << E_ECAM_SIZE_SHIFT),
+			  (NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT),
 			  E_ECAM_CONTROL);
 	nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base),
@@ -683,15 +681,6 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
 	nwl_bridge_writel(pcie, upper_32_bits(pcie->phys_ecam_base),
 			  E_ECAM_BASE_HI);
-	/* Get bus range */
-	ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL);
-	pcie->last_busno = (ecam_val & E_ECAM_SIZE_LOC) >> E_ECAM_SIZE_SHIFT;
-	/* Write primary, secondary and subordinate bus numbers */
-	ecam_val = first_busno;
-	ecam_val |= (first_busno + 1) << 8;
-	ecam_val |= (pcie->last_busno << E_ECAM_SIZE_SHIFT);
-	writel(ecam_val, (pcie->ecam_base + PCI_PRIMARY_BUS));
 	if (nwl_pcie_link_up(pcie))
 		dev_info(dev, "Link is UP\n");
 	else
@@ -792,7 +781,6 @@ static int nwl_pcie_probe(struct platform_device *pdev)
 	pcie = pci_host_bridge_priv(bridge);
 	pcie->dev = dev;
-	pcie->ecam_value = NWL_ECAM_VALUE_DEFAULT;
 	err = nwl_pcie_parse_dt(pcie, pdev);
 	if (err) {


@@ -525,10 +525,9 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
 			base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
 						PCI_DEVFN(dev, 0), 0);
-			hdr_type = readb(base + PCI_HEADER_TYPE) &
-				   PCI_HEADER_TYPE_MASK;
-			functions = (hdr_type & 0x80) ? 8 : 1;
+			hdr_type = readb(base + PCI_HEADER_TYPE);
+			functions = (hdr_type & PCI_HEADER_TYPE_MFD) ? 8 : 1;
 			for (fn = 0; fn < functions; fn++) {
 				base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
 						PCI_DEVFN(dev, fn), 0);
@@ -1078,10 +1077,7 @@ static int vmd_resume(struct device *dev)
 	struct vmd_dev *vmd = pci_get_drvdata(pdev);
 	int err, i;
-	if (vmd->irq_domain)
-		vmd_set_msi_remapping(vmd, true);
-	else
-		vmd_set_msi_remapping(vmd, false);
+	vmd_set_msi_remapping(vmd, !!vmd->irq_domain);
 	for (i = 0; i < vmd->msix_count; i++) {
 		err = devm_request_irq(dev, vmd->irqs[i].virq,


@@ -38,7 +38,7 @@ static int devm_pci_epc_match(struct device *dev, void *res, void *match_data)
  */
 void pci_epc_put(struct pci_epc *epc)
 {
-	if (!epc || IS_ERR(epc))
+	if (IS_ERR_OR_NULL(epc))
 		return;
 	module_put(epc->ops->owner);
@@ -660,7 +660,7 @@ void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
 	struct list_head *list;
 	u32 func_no = 0;
-	if (!epc || IS_ERR(epc) || !epf)
+	if (IS_ERR_OR_NULL(epc) || !epf)
 		return;
 	if (type == PRIMARY_INTERFACE) {
@@ -691,7 +691,7 @@ void pci_epc_linkup(struct pci_epc *epc)
 {
 	struct pci_epf *epf;
-	if (!epc || IS_ERR(epc))
+	if (IS_ERR_OR_NULL(epc))
 		return;
 	mutex_lock(&epc->list_lock);
@@ -717,7 +717,7 @@ void pci_epc_linkdown(struct pci_epc *epc)
 {
 	struct pci_epf *epf;
-	if (!epc || IS_ERR(epc))
+	if (IS_ERR_OR_NULL(epc))
 		return;
 	mutex_lock(&epc->list_lock);
@@ -743,7 +743,7 @@ void pci_epc_init_notify(struct pci_epc *epc)
 {
 	struct pci_epf *epf;
-	if (!epc || IS_ERR(epc))
+	if (IS_ERR_OR_NULL(epc))
 		return;
 	mutex_lock(&epc->list_lock);
@@ -769,7 +769,7 @@ void pci_epc_bme_notify(struct pci_epc *epc)
 {
 	struct pci_epf *epf;
-	if (!epc || IS_ERR(epc))
+	if (IS_ERR_OR_NULL(epc))
 		return;
 	mutex_lock(&epc->list_lock);
@@ -869,7 +869,6 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
 put_dev:
 	put_device(&epc->dev);
-	kfree(epc);
 err_ret:
 	return ERR_PTR(ret);
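The epc-core hunks above collapse the open-coded `!epc || IS_ERR(epc)` into IS_ERR_OR_NULL(). A userspace model of the <linux/err.h> convention it relies on, where the top 4095 pointer values encode negative errnos:

#include <stdio.h>

#define MAX_ERRNO	4095
#define IS_ERR_VALUE(x)	((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

static int IS_ERR(const void *ptr)
{
	return IS_ERR_VALUE((unsigned long)ptr);
}

static int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);	/* the combined test */
}

int main(void)
{
	int x;
	void *ok = &x, *nul = NULL, *err = (void *)-12L;	/* -ENOMEM */

	printf("%d %d %d\n", IS_ERR_OR_NULL(ok),
	       IS_ERR_OR_NULL(nul), IS_ERR_OR_NULL(err));
	return 0;
}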


@@ -61,6 +61,18 @@ config HOTPLUG_PCI_ACPI
 	  When in doubt, say N.
+config HOTPLUG_PCI_ACPI_AMPERE_ALTRA
+	tristate "ACPI PCI Hotplug driver Ampere Altra extensions"
+	depends on HOTPLUG_PCI_ACPI
+	depends on HAVE_ARM_SMCCC_DISCOVERY
+	help
+	  Say Y here if you have an Ampere Altra system.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called acpiphp_ampere_altra.
+
+	  When in doubt, say N.
+
 config HOTPLUG_PCI_ACPI_IBM
 	tristate "ACPI PCI Hotplug driver IBM extensions"
 	depends on HOTPLUG_PCI_ACPI


@@ -23,6 +23,7 @@ obj-$(CONFIG_HOTPLUG_PCI_S390)		+= s390_pci_hpc.o
 
 # acpiphp_ibm extends acpiphp, so should be linked afterwards.
+obj-$(CONFIG_HOTPLUG_PCI_ACPI_AMPERE_ALTRA) += acpiphp_ampere_altra.o
 obj-$(CONFIG_HOTPLUG_PCI_ACPI_IBM)	+= acpiphp_ibm.o
 
 pci_hotplug-objs	:=	pci_hotplug_core.o


@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ACPI PCI Hot Plug Extension for Ampere Altra. Allows control of
+ * attention LEDs via requests to system firmware.
+ *
+ * Copyright (C) 2023 Ampere Computing LLC
+ */
+
+#define pr_fmt(fmt) "acpiphp_ampere_altra: " fmt
+
+#include <linux/arm-smccc.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/pci_hotplug.h>
+#include <linux/platform_device.h>
+
+#include "acpiphp.h"
+
+#define HANDLE_OPEN	0xb0200000
+#define HANDLE_CLOSE	0xb0300000
+#define REQUEST		0xf0700000
+
+#define LED_CMD		0x00000004
+#define LED_ATTENTION	0x00000002
+#define LED_SET_ON	0x00000001
+#define LED_SET_OFF	0x00000002
+#define LED_SET_BLINK	0x00000003
+
+static u32 led_service_id[4];
+
+static int led_status(u8 status)
+{
+	switch (status) {
+	case 1: return LED_SET_ON;
+	case 2: return LED_SET_BLINK;
+	default: return LED_SET_OFF;
+	}
+}
+
+static int set_attention_status(struct hotplug_slot *slot, u8 status)
+{
+	struct arm_smccc_res res;
+	struct pci_bus *bus;
+	struct pci_dev *root_port;
+	unsigned long flags;
+	u32 handle;
+	int ret = 0;
+
+	bus = slot->pci_slot->bus;
+	root_port = pcie_find_root_port(bus->self);
+	if (!root_port)
+		return -ENODEV;
+
+	local_irq_save(flags);
+	arm_smccc_smc(HANDLE_OPEN, led_service_id[0], led_service_id[1],
+		      led_service_id[2], led_service_id[3], 0, 0, 0, &res);
+	if (res.a0) {
+		ret = -ENODEV;
+		goto out;
+	}
+	handle = res.a1 & 0xffff0000;
+
+	arm_smccc_smc(REQUEST, LED_CMD, led_status(status), LED_ATTENTION,
+		      (PCI_SLOT(root_port->devfn) << 4) | (pci_domain_nr(bus) & 0xf),
+		      0, 0, handle, &res);
+	if (res.a0)
+		ret = -ENODEV;
+
+	arm_smccc_smc(HANDLE_CLOSE, handle, 0, 0, 0, 0, 0, 0, &res);
+
+ out:
+	local_irq_restore(flags);
+	return ret;
+}
+
+static int get_attention_status(struct hotplug_slot *slot, u8 *status)
+{
+	return -EINVAL;
+}
+
+static struct acpiphp_attention_info ampere_altra_attn = {
+	.set_attn = set_attention_status,
+	.get_attn = get_attention_status,
+	.owner = THIS_MODULE,
+};
+
+static int altra_led_probe(struct platform_device *pdev)
+{
+	struct fwnode_handle *fwnode = dev_fwnode(&pdev->dev);
+	int ret;
+
+	ret = fwnode_property_read_u32_array(fwnode, "uuid", led_service_id, 4);
+	if (ret) {
+		dev_err(&pdev->dev, "can't find uuid\n");
+		return ret;
+	}
+
+	ret = acpiphp_register_attention(&ampere_altra_attn);
+	if (ret) {
+		dev_err(&pdev->dev, "can't register driver\n");
+		return ret;
+	}
+	return 0;
+}
+
+static void altra_led_remove(struct platform_device *pdev)
+{
+	acpiphp_unregister_attention(&ampere_altra_attn);
+}
+
+static const struct acpi_device_id altra_led_ids[] = {
+	{ "AMPC0008", 0 },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, altra_led_ids);
+
+static struct platform_driver altra_led_driver = {
+	.driver = {
+		.name = "ampere-altra-leds",
+		.acpi_match_table = altra_led_ids,
+	},
+	.probe = altra_led_probe,
+	.remove_new = altra_led_remove,
+};
+module_platform_driver(altra_led_driver);
+
+MODULE_AUTHOR("D Scott Phillips <scott@os.amperecomputing.com>");
+MODULE_LICENSE("GPL");


@@ -78,8 +78,7 @@ int acpiphp_register_attention(struct acpiphp_attention_info *info)
 {
 	int retval = -EINVAL;
 
-	if (info && info->owner && info->set_attn &&
-	    info->get_attn && !attention_info) {
+	if (info && info->set_attn && info->get_attn && !attention_info) {
 		retval = 0;
 		attention_info = info;
 	}
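
Note: after this change only the two callbacks are mandatory, so built-in
(non-modular) extensions with no meaningful .owner can register. A sketch
of the contract, with hypothetical callback names:

    static struct acpiphp_attention_info info = {
            .set_attn = my_set_attn,        /* required */
            .get_attn = my_get_attn,        /* required */
            .owner    = THIS_MODULE,        /* no longer checked here */
    };

    err = acpiphp_register_attention(&info);
    /* -EINVAL if a callback is NULL or another extension is registered */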


@@ -2059,7 +2059,7 @@ int cpqhp_process_SS(struct controller *ctrl, struct pci_func *func)
 		return rc;
 
 	/* If it's a bridge, check the VGA Enable bit */
-	if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+	if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 		rc = pci_bus_read_config_byte(pci_bus, devfn, PCI_BRIDGE_CONTROL, &BCR);
 		if (rc)
 			return rc;
@@ -2342,7 +2342,7 @@ static int configure_new_function(struct controller *ctrl, struct pci_func *func
 	if (rc)
 		return rc;
 
-	if ((temp_byte & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+	if ((temp_byte & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 		/* set Primary bus */
 		dbg("set Primary bus = %d\n", func->bus);
 		rc = pci_bus_write_config_byte(pci_bus, devfn, PCI_PRIMARY_BUS, func->bus);
@@ -2739,7 +2739,7 @@ static int configure_new_function(struct controller *ctrl, struct pci_func *func
 		 *	PCI_BRIDGE_CTL_SERR |
 		 *	PCI_BRIDGE_CTL_NO_ISA */
 		rc = pci_bus_write_config_word(pci_bus, devfn, PCI_BRIDGE_CONTROL, command);
-	} else if ((temp_byte & 0x7F) == PCI_HEADER_TYPE_NORMAL) {
+	} else if ((temp_byte & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) {
 		/* Standard device */
 		rc = pci_bus_read_config_byte(pci_bus, devfn, 0x0B, &class_code);


@@ -363,7 +363,7 @@ int cpqhp_save_config(struct controller *ctrl, int busnumber, int is_hot_plug)
 			return rc;
 
 		/* If multi-function device, set max_functions to 8 */
-		if (header_type & 0x80)
+		if (header_type & PCI_HEADER_TYPE_MFD)
 			max_functions = 8;
 		else
 			max_functions = 1;
@@ -372,7 +372,7 @@ int cpqhp_save_config(struct controller *ctrl, int busnumber, int is_hot_plug)
 		do {
 			DevError = 0;
 
-			if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+			if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 				/* Recurse the subordinate bus
 				 * get the subordinate bus number
 				 */
@@ -487,13 +487,13 @@ int cpqhp_save_slot_config(struct controller *ctrl, struct pci_func *new_slot)
 	pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, 0), 0x0B, &class_code);
 	pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, 0), PCI_HEADER_TYPE, &header_type);
 
-	if (header_type & 0x80) /* Multi-function device */
+	if (header_type & PCI_HEADER_TYPE_MFD)
 		max_functions = 8;
 	else
 		max_functions = 1;
 
 	while (function < max_functions) {
-		if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+		if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 			/* Recurse the subordinate bus */
 			pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, function), PCI_SECONDARY_BUS, &secondary_bus);
@@ -571,7 +571,7 @@ int cpqhp_save_base_addr_length(struct controller *ctrl, struct pci_func *func)
 	/* Check for Bridge */
 	pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
 
-	if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+	if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 		pci_bus_read_config_byte(pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus);
 
 		sub_bus = (int) secondary_bus;
@@ -625,7 +625,7 @@ int cpqhp_save_base_addr_length(struct controller *ctrl, struct pci_func *func)
 		}	/* End of base register loop */
 
-	} else if ((header_type & 0x7F) == 0x00) {
+	} else if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) {
 		/* Figure out IO and memory base lengths */
 		for (cloop = 0x10; cloop <= 0x24; cloop += 4) {
 			temp_register = 0xFFFFFFFF;
@@ -723,7 +723,7 @@ int cpqhp_save_used_resources(struct controller *ctrl, struct pci_func *func)
 	/* Check for Bridge */
 	pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
 
-	if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+	if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 		/* Clear Bridge Control Register */
 		command = 0x00;
 		pci_bus_write_config_word(pci_bus, devfn, PCI_BRIDGE_CONTROL, command);
@@ -858,7 +858,7 @@ int cpqhp_save_used_resources(struct controller *ctrl, struct pci_func *func)
 			}
 		}	/* End of base register loop */
 
 	/* Standard header */
-	} else if ((header_type & 0x7F) == 0x00) {
+	} else if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) {
 		/* Figure out IO and memory base lengths */
 		for (cloop = 0x10; cloop <= 0x24; cloop += 4) {
 			pci_bus_read_config_dword(pci_bus, devfn, cloop, &save_base);
@@ -975,7 +975,7 @@ int cpqhp_configure_board(struct controller *ctrl, struct pci_func *func)
 	pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
 
 	/* If this is a bridge device, restore subordinate devices */
-	if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+	if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 		pci_bus_read_config_byte(pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus);
 
 		sub_bus = (int) secondary_bus;
@@ -1067,7 +1067,7 @@ int cpqhp_valid_replace(struct controller *ctrl, struct pci_func *func)
 		/* Check for Bridge */
 		pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
 
-		if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
+		if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 			/* In order to continue checking, we must program the
 			 * bus registers in the bridge to respond to accesses
 			 * for its subordinate bus(es)
@@ -1090,7 +1090,7 @@ int cpqhp_valid_replace(struct controller *ctrl, struct pci_func *func)
 		}
 		/* Check to see if it is a standard config header */
-		else if ((header_type & 0x7F) == PCI_HEADER_TYPE_NORMAL) {
+		else if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) {
 			/* Check subsystem vendor and ID */
 			pci_bus_read_config_dword(pci_bus, devfn, PCI_SUBSYSTEM_VENDOR_ID, &temp_register);


@@ -17,6 +17,7 @@
  */
 
 #include <linux/pci_hotplug.h>
+#include <linux/pci_regs.h>
 
 extern int ibmphp_debug;
 
@@ -286,8 +287,8 @@ int ibmphp_register_pci(void);
 
 /* pci specific defines */
 #define PCI_VENDOR_ID_NOTVALID		0xFFFF
-#define PCI_HEADER_TYPE_MULTIDEVICE	0x80
-#define PCI_HEADER_TYPE_MULTIBRIDGE	0x81
+#define PCI_HEADER_TYPE_MULTIDEVICE	(PCI_HEADER_TYPE_MFD|PCI_HEADER_TYPE_NORMAL)
+#define PCI_HEADER_TYPE_MULTIBRIDGE	(PCI_HEADER_TYPE_MFD|PCI_HEADER_TYPE_BRIDGE)
 
 #define LATENCY		0x64
 #define CACHE		64


@@ -1087,7 +1087,7 @@ static struct res_needed *scan_behind_bridge(struct pci_func *func, u8 busno)
 				pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class);
 
 				debug("hdr_type behind the bridge is %x\n", hdr_type);
-				if ((hdr_type & 0x7f) == PCI_HEADER_TYPE_BRIDGE) {
+				if ((hdr_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
 					err("embedded bridges not supported for hot-plugging.\n");
 					amount->not_correct = 1;
 					return amount;


@@ -20,6 +20,7 @@
 #define pr_fmt(fmt) "pciehp: " fmt
 #define dev_fmt pr_fmt
 
+#include <linux/bitfield.h>
 #include <linux/moduleparam.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
@@ -103,7 +104,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
 	struct pci_dev *pdev = ctrl->pcie->port;
 
 	if (status)
-		status <<= PCI_EXP_SLTCTL_ATTN_IND_SHIFT;
+		status = FIELD_PREP(PCI_EXP_SLTCTL_AIC, status);
 	else
 		status = PCI_EXP_SLTCTL_ATTN_IND_OFF;


@@ -14,6 +14,7 @@
 
 #define dev_fmt(fmt) "pciehp: " fmt
 
+#include <linux/bitfield.h>
 #include <linux/dmi.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
@@ -484,7 +485,7 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 
 	pci_config_pm_runtime_get(pdev);
-	pcie_write_cmd_nowait(ctrl, status << 6,
+	pcie_write_cmd_nowait(ctrl, FIELD_PREP(PCI_EXP_SLTCTL_AIC, status),
 			      PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC);
 	pci_config_pm_runtime_put(pdev);
 	return 0;
@@ -1028,7 +1029,7 @@ struct controller *pcie_init(struct pcie_device *dev)
 		PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC);
 
 	ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c IbPresDis%c LLActRep%c%s\n",
-		(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
+		FIELD_GET(PCI_EXP_SLTCAP_PSN, slot_cap),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_ABP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_PCP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_MRLSP),


@@ -5,6 +5,7 @@
  * Copyright Gavin Shan, IBM Corporation 2016.
  */
 
+#include <linux/bitfield.h>
 #include <linux/libfdt.h>
 #include <linux/module.h>
 #include <linux/pci.h>
@@ -731,7 +732,7 @@ static int pnv_php_enable_msix(struct pnv_php_slot *php_slot)
 
 	/* Check hotplug MSIx entry is in range */
 	pcie_capability_read_word(pdev, PCI_EXP_FLAGS, &pcie_flag);
-	entry.entry = (pcie_flag & PCI_EXP_FLAGS_IRQ) >> 9;
+	entry.entry = FIELD_GET(PCI_EXP_FLAGS_IRQ, pcie_flag);
 	if (entry.entry >= nr_entries)
 		return -ERANGE;


@@ -6,6 +6,7 @@
  * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
  * Copyright (C) 2016 Christoph Hellwig.
  */
+#include <linux/bitfield.h>
 #include <linux/err.h>
 #include <linux/export.h>
 #include <linux/irq.h>
@@ -188,7 +189,7 @@ static inline void pci_write_msg_msi(struct pci_dev *dev, struct msi_desc *desc,
 		pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
 		msgctl &= ~PCI_MSI_FLAGS_QSIZE;
-		msgctl |= desc->pci.msi_attrib.multiple << 4;
+		msgctl |= FIELD_PREP(PCI_MSI_FLAGS_QSIZE, desc->pci.msi_attrib.multiple);
 		pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl);
 
 		pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, msg->address_lo);
@@ -299,7 +300,7 @@ static int msi_setup_msi_desc(struct pci_dev *dev, int nvec,
 	desc.pci.msi_attrib.is_64	= !!(control & PCI_MSI_FLAGS_64BIT);
 	desc.pci.msi_attrib.can_mask	= !!(control & PCI_MSI_FLAGS_MASKBIT);
 	desc.pci.msi_attrib.default_irq	= dev->irq;
-	desc.pci.msi_attrib.multi_cap	= (control & PCI_MSI_FLAGS_QMASK) >> 1;
+	desc.pci.msi_attrib.multi_cap	= FIELD_GET(PCI_MSI_FLAGS_QMASK, control);
 	desc.pci.msi_attrib.multiple	= ilog2(__roundup_pow_of_two(nvec));
 	desc.affinity			= masks;
 
@@ -478,7 +479,7 @@ int pci_msi_vec_count(struct pci_dev *dev)
 		return -EINVAL;
 
 	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
-	ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
+	ret = 1 << FIELD_GET(PCI_MSI_FLAGS_QMASK, msgctl);
 
 	return ret;
 }
@@ -511,7 +512,8 @@ void __pci_restore_msi_state(struct pci_dev *dev)
 	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
 	pci_msi_update_mask(entry, 0, 0);
 	control &= ~PCI_MSI_FLAGS_QSIZE;
-	control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE;
+	control |= PCI_MSI_FLAGS_ENABLE |
+		   FIELD_PREP(PCI_MSI_FLAGS_QSIZE, entry->pci.msi_attrib.multiple);
 	pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
 }
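
Note: the FIELD_GET()/FIELD_PREP() conversions here (and in pciehp,
pnv_php, pci.c, aer/aspm/dpc/pme/portdrv/ptm and probe.c below) are
mechanical: the macros derive the shift from the mask at compile time, so
the magic shift constants disappear. A worked illustration with
PCI_MSI_FLAGS_QSIZE (mask 0x70, bits 6:4):

    #include <linux/bitfield.h>
    #include <linux/pci_regs.h>

    u16 msgctl = 0x0123;
    u16 qsize  = FIELD_GET(PCI_MSI_FLAGS_QSIZE, msgctl); /* (0x123 & 0x70) >> 4 == 0x2 */
    u16 bits   = FIELD_PREP(PCI_MSI_FLAGS_QSIZE, qsize); /* 0x2 << 4 == 0x20 */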


@@ -28,9 +28,9 @@ struct pci_p2pdma {
 };
 
 struct pci_p2pdma_pagemap {
-	struct dev_pagemap pgmap;
 	struct pci_dev *provider;
 	u64 bus_offset;
+	struct dev_pagemap pgmap;
 };
 
 static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap)
@@ -837,7 +837,6 @@ void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size)
 	if (unlikely(!percpu_ref_tryget_live_rcu(ref))) {
 		gen_pool_free(p2pdma->pool, (unsigned long) ret, size);
 		ret = NULL;
-		goto out;
 	}
 out:
 	rcu_read_unlock();
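
Note: the reordering matters because struct dev_pagemap ends in a
flexible array of ranges, and a structure containing a flexible array
member is only valid as the last field of an enclosing structure.
Schematically (not the real definitions):

    struct pagemap_like {
            int nr_range;
            struct range ranges[];          /* flexible array member */
    };

    struct container {
            struct pci_dev *provider;
            u64 bus_offset;
            struct pagemap_like pgmap;      /* must come last */
    };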


@@ -911,7 +911,7 @@ pci_power_t acpi_pci_choose_state(struct pci_dev *pdev)
 {
 	int acpi_state, d_max;
 
-	if (pdev->no_d3cold)
+	if (pdev->no_d3cold || !pdev->d3cold_allowed)
 		d_max = ACPI_STATE_D3_HOT;
 	else
 		d_max = ACPI_STATE_D3_COLD;
@@ -1215,12 +1215,12 @@ void acpi_pci_add_bus(struct pci_bus *bus)
 	if (!pci_is_root_bus(bus))
 		return;
 
-	obj = acpi_evaluate_dsm(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 3,
-				DSM_PCI_POWER_ON_RESET_DELAY, NULL);
+	obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 3,
+				      DSM_PCI_POWER_ON_RESET_DELAY, NULL, ACPI_TYPE_INTEGER);
 	if (!obj)
 		return;
 
-	if (obj->type == ACPI_TYPE_INTEGER && obj->integer.value == 1) {
+	if (obj->integer.value == 1) {
 		bridge = pci_find_host_bridge(bus);
 		bridge->ignore_reset_delay = 1;
 	}
@@ -1376,12 +1376,13 @@ static void pci_acpi_optimize_delay(struct pci_dev *pdev,
 	if (bridge->ignore_reset_delay)
 		pdev->d3cold_delay = 0;
 
-	obj = acpi_evaluate_dsm(handle, &pci_acpi_dsm_guid, 3,
-				DSM_PCI_DEVICE_READINESS_DURATIONS, NULL);
+	obj = acpi_evaluate_dsm_typed(handle, &pci_acpi_dsm_guid, 3,
+				      DSM_PCI_DEVICE_READINESS_DURATIONS, NULL,
+				      ACPI_TYPE_PACKAGE);
 	if (!obj)
 		return;
 
-	if (obj->type == ACPI_TYPE_PACKAGE && obj->package.count == 5) {
+	if (obj->package.count == 5) {
 		elements = obj->package.elements;
 		if (elements[0].type == ACPI_TYPE_INTEGER) {
 			value = (int)elements[0].integer.value / 1000;
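
Note: acpi_evaluate_dsm_typed() folds the type check into the evaluation,
returning NULL unless the _DSM result has the requested ACPI object type,
which is what lets the callers above drop their obj->type tests. The
shape of the contract (handle/guid/rev/func stand in for real arguments):

    union acpi_object *obj;

    obj = acpi_evaluate_dsm_typed(handle, &guid, rev, func, NULL,
                                  ACPI_TYPE_INTEGER);
    if (!obj)               /* no _DSM, evaluation error, or wrong type */
            return;
    /* obj->integer is now known to be valid */
    do_something(obj->integer.value);
    ACPI_FREE(obj);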


@@ -12,7 +12,7 @@
  * Modeled after usb's driverfs.c
  */
 
+#include <linux/bitfield.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/pci.h>
@@ -230,8 +230,7 @@ static ssize_t current_link_width_show(struct device *dev,
 	if (err)
 		return -EINVAL;
 
-	return sysfs_emit(buf, "%u\n",
-		(linkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT);
+	return sysfs_emit(buf, "%u\n", FIELD_GET(PCI_EXP_LNKSTA_NLW, linkstat));
 }
 static DEVICE_ATTR_RO(current_link_width);
 
@@ -530,10 +529,7 @@ static ssize_t d3cold_allowed_store(struct device *dev,
 		return -EINVAL;
 
 	pdev->d3cold_allowed = !!val;
-	if (pdev->d3cold_allowed)
-		pci_d3cold_enable(pdev);
-	else
-		pci_d3cold_disable(pdev);
+	pci_bridge_d3_update(pdev);
 
 	pm_runtime_resume(dev);
 
@@ -1551,11 +1547,10 @@ static umode_t pci_dev_attrs_are_visible(struct kobject *kobj,
 	struct device *dev = kobj_to_dev(kobj);
 	struct pci_dev *pdev = to_pci_dev(dev);
 
-	if (a == &dev_attr_boot_vga.attr)
-		if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
-			return 0;
+	if (a == &dev_attr_boot_vga.attr && pci_is_vga(pdev))
+		return a->mode;
 
-	return a->mode;
+	return 0;
 }
 
 static struct attribute *pci_dev_hp_attrs[] = {
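
Note: pci_is_vga(), new this cycle, matches PCI_CLASS_DISPLAY_VGA and the
ancient pre-Class-Code PCI_CLASS_NOT_DEFINED_VGA devices, which is why
the open-coded class comparison above goes away. Approximately:

    /* sketch of the helper added to <linux/pci.h> */
    static inline bool pci_is_vga(struct pci_dev *pdev)
    {
            if ((pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA)
                    return true;
            if ((pdev->class >> 8) == PCI_CLASS_NOT_DEFINED_VGA)
                    return true;
            return false;
    }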


@@ -534,7 +534,7 @@ u8 pci_bus_find_capability(struct pci_bus *bus, unsigned int devfn, int cap)
 	pci_bus_read_config_byte(bus, devfn, PCI_HEADER_TYPE, &hdr_type);
 
-	pos = __pci_bus_find_cap_start(bus, devfn, hdr_type & 0x7f);
+	pos = __pci_bus_find_cap_start(bus, devfn, hdr_type & PCI_HEADER_TYPE_MASK);
 	if (pos)
 		pos = __pci_find_next_cap(bus, devfn, pos, cap);
@@ -732,15 +732,18 @@ u16 pci_find_vsec_capability(struct pci_dev *dev, u16 vendor, int cap)
 {
 	u16 vsec = 0;
 	u32 header;
+	int ret;
 
 	if (vendor != dev->vendor)
 		return 0;
 
 	while ((vsec = pci_find_next_ext_capability(dev, vsec,
 						     PCI_EXT_CAP_ID_VNDR))) {
-		if (pci_read_config_dword(dev, vsec + PCI_VNDR_HEADER,
-					  &header) == PCIBIOS_SUCCESSFUL &&
-		    PCI_VNDR_HEADER_ID(header) == cap)
+		ret = pci_read_config_dword(dev, vsec + PCI_VNDR_HEADER, &header);
+		if (ret != PCIBIOS_SUCCESSFUL)
+			continue;
+
+		if (PCI_VNDR_HEADER_ID(header) == cap)
 			return vsec;
 	}
@@ -1775,8 +1778,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
 		return;
 
 	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
-	nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
-		    PCI_REBAR_CTRL_NBAR_SHIFT;
+	nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, ctrl);
 
 	for (i = 0; i < nbars; i++, pos += 8) {
 		struct resource *res;
@@ -1787,7 +1789,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
 		res = pdev->resource + bar_idx;
 		size = pci_rebar_bytes_to_size(resource_size(res));
 		ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
-		ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT;
+		ctrl |= FIELD_PREP(PCI_REBAR_CTRL_BAR_SIZE, size);
 		pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
 	}
 }
@@ -3228,7 +3230,7 @@ void pci_pm_init(struct pci_dev *dev)
 			 (pmc & PCI_PM_CAP_PME_D2) ? " D2" : "",
 			 (pmc & PCI_PM_CAP_PME_D3hot) ? " D3hot" : "",
 			 (pmc & PCI_PM_CAP_PME_D3cold) ? " D3cold" : "");
-		dev->pme_support = pmc >> PCI_PM_CAP_PME_SHIFT;
+		dev->pme_support = FIELD_GET(PCI_PM_CAP_PME_MASK, pmc);
 		dev->pme_poll = true;
 		/*
 		 * Make device's PM flags reflect the wake-up capability, but
@@ -3299,20 +3301,20 @@ static int pci_ea_read(struct pci_dev *dev, int offset)
 	ent_offset += 4;
 
 	/* Entry size field indicates DWORDs after 1st */
-	ent_size = ((dw0 & PCI_EA_ES) + 1) << 2;
+	ent_size = (FIELD_GET(PCI_EA_ES, dw0) + 1) << 2;
 
 	if (!(dw0 & PCI_EA_ENABLE)) /* Entry not enabled */
 		goto out;
 
-	bei = (dw0 & PCI_EA_BEI) >> 4;
-	prop = (dw0 & PCI_EA_PP) >> 8;
+	bei = FIELD_GET(PCI_EA_BEI, dw0);
+	prop = FIELD_GET(PCI_EA_PP, dw0);
 
 	/*
 	 * If the Property is in the reserved range, try the Secondary
 	 * Property instead.
 	 */
 	if (prop > PCI_EA_P_BRIDGE_IO && prop < PCI_EA_P_MEM_RESERVED)
-		prop = (dw0 & PCI_EA_SP) >> 16;
+		prop = FIELD_GET(PCI_EA_SP, dw0);
 	if (prop > PCI_EA_P_BRIDGE_IO)
 		goto out;
@@ -3719,14 +3721,13 @@ static int pci_rebar_find_pos(struct pci_dev *pdev, int bar)
 		return -ENOTSUPP;
 
 	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
-	nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
-		    PCI_REBAR_CTRL_NBAR_SHIFT;
+	nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, ctrl);
 
 	for (i = 0; i < nbars; i++, pos += 8) {
 		int bar_idx;
 
 		pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
-		bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
+		bar_idx = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, ctrl);
 		if (bar_idx == bar)
 			return pos;
 	}
@@ -3752,14 +3753,14 @@ u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar)
 		return 0;
 
 	pci_read_config_dword(pdev, pos + PCI_REBAR_CAP, &cap);
-	cap &= PCI_REBAR_CAP_SIZES;
+	cap = FIELD_GET(PCI_REBAR_CAP_SIZES, cap);
 
 	/* Sapphire RX 5600 XT Pulse has an invalid cap dword for BAR 0 */
 	if (pdev->vendor == PCI_VENDOR_ID_ATI && pdev->device == 0x731f &&
-	    bar == 0 && cap == 0x7000)
-		cap = 0x3f000;
+	    bar == 0 && cap == 0x700)
+		return 0x3f00;
 
-	return cap >> 4;
+	return cap;
 }
 EXPORT_SYMBOL(pci_rebar_get_possible_sizes);
@@ -3781,7 +3782,7 @@ int pci_rebar_get_current_size(struct pci_dev *pdev, int bar)
 		return pos;
 
 	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
-	return (ctrl & PCI_REBAR_CTRL_BAR_SIZE) >> PCI_REBAR_CTRL_BAR_SHIFT;
+	return FIELD_GET(PCI_REBAR_CTRL_BAR_SIZE, ctrl);
 }
 
 /**
@@ -3804,7 +3805,7 @@ int pci_rebar_set_size(struct pci_dev *pdev, int bar, int size)
 
 	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
 	ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
-	ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT;
+	ctrl |= FIELD_PREP(PCI_REBAR_CTRL_BAR_SIZE, size);
 	pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
 	return 0;
 }
@@ -6042,7 +6043,7 @@ int pcix_get_max_mmrbc(struct pci_dev *dev)
 	if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat))
 		return -EINVAL;
 
-	return 512 << ((stat & PCI_X_STATUS_MAX_READ) >> 21);
+	return 512 << FIELD_GET(PCI_X_STATUS_MAX_READ, stat);
 }
 EXPORT_SYMBOL(pcix_get_max_mmrbc);
@@ -6065,7 +6066,7 @@ int pcix_get_mmrbc(struct pci_dev *dev)
 	if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd))
 		return -EINVAL;
 
-	return 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2);
+	return 512 << FIELD_GET(PCI_X_CMD_MAX_READ, cmd);
 }
 EXPORT_SYMBOL(pcix_get_mmrbc);
@@ -6096,19 +6097,19 @@ int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc)
 	if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat))
 		return -EINVAL;
 
-	if (v > (stat & PCI_X_STATUS_MAX_READ) >> 21)
+	if (v > FIELD_GET(PCI_X_STATUS_MAX_READ, stat))
 		return -E2BIG;
 
 	if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd))
 		return -EINVAL;
 
-	o = (cmd & PCI_X_CMD_MAX_READ) >> 2;
+	o = FIELD_GET(PCI_X_CMD_MAX_READ, cmd);
 	if (o != v) {
 		if (v > o && (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_MMRBC))
 			return -EIO;
 
 		cmd &= ~PCI_X_CMD_MAX_READ;
-		cmd |= v << 2;
+		cmd |= FIELD_PREP(PCI_X_CMD_MAX_READ, v);
 
 		if (pci_write_config_word(dev, cap + PCI_X_CMD, cmd))
 			return -EIO;
 	}
@@ -6128,7 +6129,7 @@ int pcie_get_readrq(struct pci_dev *dev)
 	pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &ctl);
 
-	return 128 << ((ctl & PCI_EXP_DEVCTL_READRQ) >> 12);
+	return 128 << FIELD_GET(PCI_EXP_DEVCTL_READRQ, ctl);
 }
 EXPORT_SYMBOL(pcie_get_readrq);
@@ -6161,7 +6162,7 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
 			rq = mps;
 	}
 
-	v = (ffs(rq) - 8) << 12;
+	v = FIELD_PREP(PCI_EXP_DEVCTL_READRQ, ffs(rq) - 8);
 
 	if (bridge->no_inc_mrrs) {
 		int max_mrrs = pcie_get_readrq(dev);
@@ -6191,7 +6192,7 @@ int pcie_get_mps(struct pci_dev *dev)
 	pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &ctl);
 
-	return 128 << ((ctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5);
+	return 128 << FIELD_GET(PCI_EXP_DEVCTL_PAYLOAD, ctl);
 }
 EXPORT_SYMBOL(pcie_get_mps);
@@ -6214,7 +6215,7 @@ int pcie_set_mps(struct pci_dev *dev, int mps)
 	v = ffs(mps) - 8;
 	if (v > dev->pcie_mpss)
 		return -EINVAL;
-	v <<= 5;
+	v = FIELD_PREP(PCI_EXP_DEVCTL_PAYLOAD, v);
 
 	ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
 						  PCI_EXP_DEVCTL_PAYLOAD, v);
@@ -6256,9 +6257,9 @@ u32 pcie_bandwidth_available(struct pci_dev *dev, struct pci_dev **limiting_dev,
 	while (dev) {
 		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
 
-		next_speed = pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS];
-		next_width = (lnksta & PCI_EXP_LNKSTA_NLW) >>
-			PCI_EXP_LNKSTA_NLW_SHIFT;
+		next_speed = pcie_link_speed[FIELD_GET(PCI_EXP_LNKSTA_CLS,
+						       lnksta)];
+		next_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);
 
 		next_bw = next_width * PCIE_SPEED2MBS_ENC(next_speed);
@@ -6330,7 +6331,7 @@ enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev)
 	pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
 	if (lnkcap)
-		return (lnkcap & PCI_EXP_LNKCAP_MLW) >> 4;
+		return FIELD_GET(PCI_EXP_LNKCAP_MLW, lnkcap);
 
 	return PCIE_LNK_WIDTH_UNKNOWN;
 }
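
Note: one behavioral detail hides in pci_rebar_get_possible_sizes():
PCI_REBAR_CAP_SIZES starts at bit 4, so once FIELD_GET() performs the
shift, the Sapphire RX 5600 XT quirk constants shrink by a nibble (0x7000
becomes 0x700, and the fixed-up return value 0x3f000 becomes 0x3f00); the
bitmap callers see is unchanged. Each set bit n still advertises a
2^n MB BAR:

    unsigned long sizes = 0x3f00;   /* bits 8..13: 256 MB .. 8 GB */
    int bit;

    for_each_set_bit(bit, &sizes, BITS_PER_LONG)
            pr_info("supports %llu MB BAR\n", 1ULL << bit);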


@@ -13,6 +13,9 @@
 
 #define PCIE_LINK_RETRAIN_TIMEOUT_MS	1000
 
+/* Power stable to PERST# inactive from PCIe card Electromechanical Spec */
+#define PCIE_T_PVPERL_MS		100
+
 /*
  * PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization>
  * Recommends 1ms to 10ms timeout to check L2 ready.


@@ -1224,6 +1224,28 @@ static irqreturn_t aer_irq(int irq, void *context)
 	return IRQ_WAKE_THREAD;
 }
 
+static void aer_enable_irq(struct pci_dev *pdev)
+{
+	int aer = pdev->aer_cap;
+	u32 reg32;
+
+	/* Enable Root Port's interrupt in response to error messages */
+	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
+	reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
+	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
+}
+
+static void aer_disable_irq(struct pci_dev *pdev)
+{
+	int aer = pdev->aer_cap;
+	u32 reg32;
+
+	/* Disable Root's interrupt in response to error messages */
+	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
+	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
+	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
+}
+
 /**
  * aer_enable_rootport - enable Root Port's interrupts when receiving messages
  * @rpc: pointer to a Root Port data structure
@@ -1253,10 +1275,7 @@ static void aer_enable_rootport(struct aer_rpc *rpc)
 	pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32);
 	pci_write_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, reg32);
 
-	/* Enable Root Port's interrupt in response to error messages */
-	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
-	reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
-	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
+	aer_enable_irq(pdev);
 }
 
 /**
@@ -1271,10 +1290,7 @@ static void aer_disable_rootport(struct aer_rpc *rpc)
 	int aer = pdev->aer_cap;
 	u32 reg32;
 
-	/* Disable Root's interrupt in response to error messages */
-	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
-	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
-	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
+	aer_disable_irq(pdev);
 
 	/* Clear Root's error status reg */
 	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32);
@@ -1369,12 +1385,8 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 	 */
 	aer = root ? root->aer_cap : 0;
 
-	if ((host->native_aer || pcie_ports_native) && aer) {
-		/* Disable Root's interrupt in response to error messages */
-		pci_read_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, &reg32);
-		reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
-		pci_write_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, reg32);
-	}
+	if ((host->native_aer || pcie_ports_native) && aer)
+		aer_disable_irq(root);
 
 	if (type == PCI_EXP_TYPE_RC_EC || type == PCI_EXP_TYPE_RC_END) {
 		rc = pcie_reset_flr(dev, PCI_RESET_DO_RESET);
@@ -1393,10 +1405,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 		pci_read_config_dword(root, aer + PCI_ERR_ROOT_STATUS, &reg32);
 		pci_write_config_dword(root, aer + PCI_ERR_ROOT_STATUS, reg32);
 
-		/* Enable Root Port's interrupt in response to error messages */
-		pci_read_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, &reg32);
-		reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
-		pci_write_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, reg32);
+		aer_enable_irq(root);
 	}
 
 	return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;


@@ -7,7 +7,9 @@
  * Copyright (C) Shaohua Li (shaohua.li@intel.com)
  */
 
+#include <linux/bitfield.h>
 #include <linux/kernel.h>
+#include <linux/limits.h>
 #include <linux/math.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
@@ -16,9 +18,10 @@
 #include <linux/errno.h>
 #include <linux/pm.h>
 #include <linux/init.h>
+#include <linux/printk.h>
 #include <linux/slab.h>
-#include <linux/jiffies.h>
-#include <linux/delay.h>
+#include <linux/time.h>
 #include "../pci.h"
 
 #ifdef MODULE_PARAM_PREFIX
@@ -267,10 +270,10 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
 /* Convert L0s latency encoding to ns */
 static u32 calc_l0s_latency(u32 lnkcap)
 {
-	u32 encoding = (lnkcap & PCI_EXP_LNKCAP_L0SEL) >> 12;
+	u32 encoding = FIELD_GET(PCI_EXP_LNKCAP_L0SEL, lnkcap);
 
 	if (encoding == 0x7)
-		return (5 * 1000);	/* > 4us */
+		return 5 * NSEC_PER_USEC;	/* > 4us */
 	return (64 << encoding);
 }
@@ -278,26 +281,26 @@ static u32 calc_l0s_latency(u32 lnkcap)
 static u32 calc_l0s_acceptable(u32 encoding)
 {
 	if (encoding == 0x7)
-		return -1U;
+		return U32_MAX;
 	return (64 << encoding);
 }
 
 /* Convert L1 latency encoding to ns */
 static u32 calc_l1_latency(u32 lnkcap)
 {
-	u32 encoding = (lnkcap & PCI_EXP_LNKCAP_L1EL) >> 15;
+	u32 encoding = FIELD_GET(PCI_EXP_LNKCAP_L1EL, lnkcap);
 
 	if (encoding == 0x7)
-		return (65 * 1000);	/* > 64us */
-	return (1000 << encoding);
+		return 65 * NSEC_PER_USEC;	/* > 64us */
+	return NSEC_PER_USEC << encoding;
 }
 
 /* Convert L1 acceptable latency encoding to ns */
 static u32 calc_l1_acceptable(u32 encoding)
 {
 	if (encoding == 0x7)
-		return -1U;
-	return (1000 << encoding);
+		return U32_MAX;
+	return NSEC_PER_USEC << encoding;
 }
 
 /* Convert L1SS T_pwr encoding to usec */
@@ -325,33 +328,33 @@ static u32 calc_l12_pwron(struct pci_dev *pdev, u32 scale, u32 val)
  */
 static void encode_l12_threshold(u32 threshold_us, u32 *scale, u32 *value)
 {
-	u64 threshold_ns = (u64) threshold_us * 1000;
+	u64 threshold_ns = (u64)threshold_us * NSEC_PER_USEC;
 
 	/*
 	 * LTR_L1.2_THRESHOLD_Value ("value") is a 10-bit field with max
 	 * value of 0x3ff.
 	 */
-	if (threshold_ns <= 0x3ff * 1) {
+	if (threshold_ns <= 1 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) {
 		*scale = 0;		/* Value times 1ns */
 		*value = threshold_ns;
-	} else if (threshold_ns <= 0x3ff * 32) {
+	} else if (threshold_ns <= 32 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) {
 		*scale = 1;		/* Value times 32ns */
 		*value = roundup(threshold_ns, 32) / 32;
-	} else if (threshold_ns <= 0x3ff * 1024) {
+	} else if (threshold_ns <= 1024 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) {
 		*scale = 2;		/* Value times 1024ns */
 		*value = roundup(threshold_ns, 1024) / 1024;
-	} else if (threshold_ns <= 0x3ff * 32768) {
+	} else if (threshold_ns <= 32768 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) {
 		*scale = 3;		/* Value times 32768ns */
 		*value = roundup(threshold_ns, 32768) / 32768;
-	} else if (threshold_ns <= 0x3ff * 1048576) {
+	} else if (threshold_ns <= 1048576 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) {
 		*scale = 4;		/* Value times 1048576ns */
 		*value = roundup(threshold_ns, 1048576) / 1048576;
-	} else if (threshold_ns <= 0x3ff * (u64) 33554432) {
+	} else if (threshold_ns <= (u64)33554432 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) {
 		*scale = 5;		/* Value times 33554432ns */
 		*value = roundup(threshold_ns, 33554432) / 33554432;
 	} else {
 		*scale = 5;
-		*value = 0x3ff;		/* Max representable value */
+		*value = FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE);
 	}
 }
@@ -371,11 +374,11 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
 	link = endpoint->bus->self->link_state;
 
 	/* Calculate endpoint L0s acceptable latency */
-	encoding = (endpoint->devcap & PCI_EXP_DEVCAP_L0S) >> 6;
+	encoding = FIELD_GET(PCI_EXP_DEVCAP_L0S, endpoint->devcap);
 	acceptable_l0s = calc_l0s_acceptable(encoding);
 
 	/* Calculate endpoint L1 acceptable latency */
-	encoding = (endpoint->devcap & PCI_EXP_DEVCAP_L1) >> 9;
+	encoding = FIELD_GET(PCI_EXP_DEVCAP_L1, endpoint->devcap);
 	acceptable_l1 = calc_l1_acceptable(encoding);
 
 	while (link) {
@@ -417,7 +420,7 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
 		if ((link->aspm_capable & ASPM_STATE_L1) &&
 		    (latency + l1_switch_latency > acceptable_l1))
 			link->aspm_capable &= ~ASPM_STATE_L1;
-		l1_switch_latency += 1000;
+		l1_switch_latency += NSEC_PER_USEC;
 
 		link = link->parent;
 	}
@@ -446,22 +449,24 @@ static void aspm_calc_l12_info(struct pcie_link_state *link,
 	u32 pl1_2_enables, cl1_2_enables;
 
 	/* Choose the greater of the two Port Common_Mode_Restore_Times */
-	val1 = (parent_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
-	val2 = (child_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
+	val1 = FIELD_GET(PCI_L1SS_CAP_CM_RESTORE_TIME, parent_l1ss_cap);
+	val2 = FIELD_GET(PCI_L1SS_CAP_CM_RESTORE_TIME, child_l1ss_cap);
 	t_common_mode = max(val1, val2);
 
 	/* Choose the greater of the two Port T_POWER_ON times */
-	val1   = (parent_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
-	scale1 = (parent_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;
-	val2   = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
-	scale2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;
+	val1   = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_VALUE, parent_l1ss_cap);
+	scale1 = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_SCALE, parent_l1ss_cap);
+	val2   = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_VALUE, child_l1ss_cap);
+	scale2 = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_SCALE, child_l1ss_cap);
 
 	if (calc_l12_pwron(parent, scale1, val1) >
 	    calc_l12_pwron(child, scale2, val2)) {
-		ctl2 |= scale1 | (val1 << 3);
+		ctl2 |= FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_SCALE, scale1) |
+			FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_VALUE, val1);
 		t_power_on = calc_l12_pwron(parent, scale1, val1);
 	} else {
-		ctl2 |= scale2 | (val2 << 3);
+		ctl2 |= FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_SCALE, scale2) |
+			FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_VALUE, val2);
 		t_power_on = calc_l12_pwron(child, scale2, val2);
 	}
@@ -477,7 +482,9 @@ static void aspm_calc_l12_info(struct pcie_link_state *link,
 	 */
 	l1_2_threshold = 2 + 4 + t_common_mode + t_power_on;
 	encode_l12_threshold(l1_2_threshold, &scale, &value);
-	ctl1 |= t_common_mode << 8 | scale << 29 | value << 16;
+	ctl1 |= FIELD_PREP(PCI_L1SS_CTL1_CM_RESTORE_TIME, t_common_mode) |
+		FIELD_PREP(PCI_L1SS_CTL1_LTR_L12_TH_VALUE, value) |
+		FIELD_PREP(PCI_L1SS_CTL1_LTR_L12_TH_SCALE, scale);
 
 	/* Some broken devices only support dword access to L1 SS */
 	pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, &pctl1);
@@ -689,10 +696,10 @@ static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state)
 	 * in pcie_config_aspm_link().
 	 */
 	if (enable_req & (ASPM_STATE_L1_1 | ASPM_STATE_L1_2)) {
-		pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
-						   PCI_EXP_LNKCTL_ASPM_L1, 0);
-		pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
-						   PCI_EXP_LNKCTL_ASPM_L1, 0);
+		pcie_capability_clear_word(child, PCI_EXP_LNKCTL,
+					   PCI_EXP_LNKCTL_ASPM_L1);
+		pcie_capability_clear_word(parent, PCI_EXP_LNKCTL,
+					   PCI_EXP_LNKCTL_ASPM_L1);
 	}
 
 	val = 0;
@@ -1059,7 +1066,8 @@ static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
 	if (state & PCIE_LINK_STATE_L0S)
 		link->aspm_disable |= ASPM_STATE_L0S;
 	if (state & PCIE_LINK_STATE_L1)
-		link->aspm_disable |= ASPM_STATE_L1;
+		/* L1 PM substates require L1 */
+		link->aspm_disable |= ASPM_STATE_L1 | ASPM_STATE_L1SS;
 	if (state & PCIE_LINK_STATE_L1_1)
 		link->aspm_disable |= ASPM_STATE_L1_1;
 	if (state & PCIE_LINK_STATE_L1_2)
@@ -1247,6 +1255,8 @@ static ssize_t aspm_attr_store_common(struct device *dev,
 			link->aspm_disable &= ~ASPM_STATE_L1;
 	} else {
 		link->aspm_disable |= state;
+		if (state & ASPM_STATE_L1)
+			link->aspm_disable |= ASPM_STATE_L1SS;
 	}
 
 	pcie_config_aspm_link(link, policy_to_aspm_state(link));
@@ -1361,10 +1371,10 @@ static int __init pcie_aspm_disable(char *str)
 		aspm_policy = POLICY_DEFAULT;
 		aspm_disabled = 1;
 		aspm_support_enabled = false;
-		printk(KERN_INFO "PCIe ASPM is disabled\n");
+		pr_info("PCIe ASPM is disabled\n");
 	} else if (!strcmp(str, "force")) {
 		aspm_force = 1;
-		printk(KERN_INFO "PCIe ASPM is forcibly enabled\n");
+		pr_info("PCIe ASPM is forcibly enabled\n");
 	}
 	return 1;
 }
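
Note: the NSEC_PER_USEC/FIELD_MAX() rewrite of encode_l12_threshold()
keeps the arithmetic identical, since
FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE) is still 0x3ff. A worked
example for a 55 us threshold:

    /* threshold_ns = 55 * 1000 = 55000
     * 0x3ff * 32 = 32736 < 55000 <= 0x3ff * 1024 = 1047552
     *   => scale = 2 (1024 ns units)
     *   => value = roundup(55000, 1024) / 1024 = 54   (54 * 1024 = 55296 ns)
     */
    u32 scale, value;
    encode_l12_threshold(55, &scale, &value);   /* scale == 2, value == 54 */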


@@ -9,6 +9,7 @@
 #define dev_fmt(fmt) "DPC: " fmt
 
 #include <linux/aer.h>
+#include <linux/bitfield.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/init.h>
@@ -17,6 +18,9 @@
 #include "portdrv.h"
 #include "../pci.h"
 
+#define PCI_EXP_DPC_CTL_EN_MASK	(PCI_EXP_DPC_CTL_EN_FATAL | \
+				 PCI_EXP_DPC_CTL_EN_NONFATAL)
+
 static const char * const rp_pio_error_string[] = {
 	"Configuration Request received UR Completion",	 /* Bit Position 0 */
 	"Configuration Request received CA Completion",	 /* Bit Position 1 */
@@ -202,7 +206,7 @@ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
 
 	/* Get First Error Pointer */
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &dpc_status);
-	first_error = (dpc_status & 0x1f00) >> 8;
+	first_error = FIELD_GET(PCI_EXP_DPC_RP_PIO_FEP, dpc_status);
 
 	for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) {
 		if ((status & ~mask) & (1 << i))
@@ -270,20 +274,27 @@ void dpc_process_error(struct pci_dev *pdev)
 	pci_info(pdev, "containment event, status:%#06x source:%#06x\n",
 		 status, source);
 
-	reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1;
-	ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5;
+	reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN;
+	ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT;
 	pci_warn(pdev, "%s detected\n",
-		 (reason == 0) ? "unmasked uncorrectable error" :
-		 (reason == 1) ? "ERR_NONFATAL" :
-		 (reason == 2) ? "ERR_FATAL" :
-		 (ext_reason == 0) ? "RP PIO error" :
-		 (ext_reason == 1) ? "software trigger" :
-				     "reserved error");
+		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR) ?
+		 "unmasked uncorrectable error" :
+		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE) ?
+		 "ERR_NONFATAL" :
+		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ?
+		 "ERR_FATAL" :
+		 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ?
+		 "RP PIO error" :
+		 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ?
+		 "software trigger" :
+		 "reserved error");
 
 	/* show RP PIO error detail information */
-	if (pdev->dpc_rp_extensions && reason == 3 && ext_reason == 0)
+	if (pdev->dpc_rp_extensions &&
+	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
+	    ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO)
 		dpc_process_rp_pio_error(pdev);
-	else if (reason == 0 &&
+	else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
 		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
 		 aer_get_device_error_info(pdev, &info)) {
 		aer_print_error(pdev, &info);
@@ -338,7 +349,7 @@ void pci_dpc_init(struct pci_dev *pdev)
 	/* Quirks may set dpc_rp_log_size if device or firmware is buggy */
 	if (!pdev->dpc_rp_log_size) {
 		pdev->dpc_rp_log_size =
-			(cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
+			FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, cap);
 		if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
 			pci_err(pdev, "RP PIO log size %u is invalid\n",
 				pdev->dpc_rp_log_size);
@@ -368,12 +379,13 @@ static int dpc_probe(struct pcie_device *dev)
 	}
 
 	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
 	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
-	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
+	ctl &= ~PCI_EXP_DPC_CTL_EN_MASK;
+	ctl |= PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
 	pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
 	pci_info(pdev, "enabled with IRQ %d\n", dev->irq);
 
 	pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
 		 cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
 		 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
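
Note: the trigger-reason comparisons above are deliberately done without
FIELD_GET(): the new PCI_EXP_DPC_STATUS_TRIGGER_RSN_* constants are
defined in field position, so the masked status can be compared against
them directly:

    reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN;   /* bits 2:1 */
    if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
            ;   /* containment was triggered by ERR_FATAL */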


@@ -9,6 +9,7 @@
 
 #define dev_fmt(fmt) "PME: " fmt
 
+#include <linux/bitfield.h>
 #include <linux/pci.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>
@@ -235,7 +236,8 @@ static void pcie_pme_work_fn(struct work_struct *work)
 			pcie_clear_root_pme_status(port);
 
 			spin_unlock_irq(&data->lock);
-			pcie_pme_handle_request(port, rtsta & 0xffff);
+			pcie_pme_handle_request(port,
+				FIELD_GET(PCI_EXP_RTSTA_PME_RQ_ID, rtsta));
 			spin_lock_irq(&data->lock);
 			continue;


@@ -6,6 +6,7 @@
  * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
  */
 
+#include <linux/bitfield.h>
 #include <linux/dmi.h>
 #include <linux/init.h>
 #include <linux/module.h>
@@ -69,7 +70,7 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask,
 	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP |
 		    PCIE_PORT_SERVICE_BWNOTIF)) {
 		pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
-		*pme = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
+		*pme = FIELD_GET(PCI_EXP_FLAGS_IRQ, reg16);
 		nvec = *pme + 1;
 	}
 
@@ -81,7 +82,7 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask,
 		if (pos) {
 			pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS,
 					      &reg32);
-			*aer = (reg32 & PCI_ERR_ROOT_AER_IRQ) >> 27;
+			*aer = FIELD_GET(PCI_ERR_ROOT_AER_IRQ, reg32);
 			nvec = max(nvec, *aer + 1);
 		}
 	}
@@ -92,7 +93,7 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask,
 		if (pos) {
 			pci_read_config_word(dev, pos + PCI_EXP_DPC_CAP,
 					     &reg16);
-			*dpc = reg16 & PCI_EXP_DPC_IRQ;
+			*dpc = FIELD_GET(PCI_EXP_DPC_IRQ, reg16);
 			nvec = max(nvec, *dpc + 1);
 		}
 	}


@@ -4,6 +4,7 @@
  * Copyright (c) 2016, Intel Corporation.
  */

+#include <linux/bitfield.h>
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/pci.h>
@@ -53,7 +54,7 @@ void pci_ptm_init(struct pci_dev *dev)
 	pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_PTM, sizeof(u32));

 	pci_read_config_dword(dev, ptm + PCI_PTM_CAP, &cap);
-	dev->ptm_granularity = (cap & PCI_PTM_GRANULARITY_MASK) >> 8;
+	dev->ptm_granularity = FIELD_GET(PCI_PTM_GRANULARITY_MASK, cap);

 	/*
 	 * Per the spec recommendation (PCIe r6.0, sec 7.9.15.3), select the
@@ -146,7 +147,7 @@ static int __pci_enable_ptm(struct pci_dev *dev)

 	ctrl |= PCI_PTM_CTRL_ENABLE;
 	ctrl &= ~PCI_PTM_GRANULARITY_MASK;
-	ctrl |= dev->ptm_granularity << 8;
+	ctrl |= FIELD_PREP(PCI_PTM_GRANULARITY_MASK, dev->ptm_granularity);
 	if (dev->ptm_root)
 		ctrl |= PCI_PTM_CTRL_ROOT;
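
FIELD_PREP() is the write-side counterpart: it shifts a value into the position the mask describes before it is OR-ed into the register, replacing the bare << 8. A sketch of the round trip under the same simplified stand-ins as above (the mask value is the real PCI_PTM_GRANULARITY_MASK from pci_regs.h):

    #include <assert.h>
    #include <stdint.h>

    #define FIELD_GET(mask, reg)  (((reg) & (mask)) >> __builtin_ctz(mask))
    #define FIELD_PREP(mask, val) (((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

    #define PCI_PTM_GRANULARITY_MASK 0x0000ff00 /* local clock granularity */

    int main(void)
    {
            uint32_t ctrl = 0;
            uint8_t granularity = 16; /* e.g. 16 ns */

            ctrl &= ~PCI_PTM_GRANULARITY_MASK;
            ctrl |= FIELD_PREP(PCI_PTM_GRANULARITY_MASK, granularity);
            assert(FIELD_GET(PCI_PTM_GRANULARITY_MASK, ctrl) == granularity);
            return 0;
    }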


@@ -807,8 +807,8 @@ static void pci_set_bus_speed(struct pci_bus *bus)
 		}

 		bus->max_bus_speed = max;
-		bus->cur_bus_speed = pcix_bus_speed[
-			(status & PCI_X_SSTATUS_FREQ) >> 6];
+		bus->cur_bus_speed =
+			pcix_bus_speed[FIELD_GET(PCI_X_SSTATUS_FREQ, status)];

 		return;
 	}
@@ -1217,8 +1217,8 @@ static bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub)

 	offset = ea + PCI_EA_FIRST_ENT;
 	pci_read_config_dword(dev, offset, &dw);
-	ea_sec = dw & PCI_EA_SEC_BUS_MASK;
-	ea_sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT;
+	ea_sec = FIELD_GET(PCI_EA_SEC_BUS_MASK, dw);
+	ea_sub = FIELD_GET(PCI_EA_SUB_BUS_MASK, dw);
 	if (ea_sec == 0 || ea_sub < ea_sec)
 		return false;
@@ -1652,15 +1652,15 @@ static void pci_set_removable(struct pci_dev *dev)
 static bool pci_ext_cfg_is_aliased(struct pci_dev *dev)
 {
 #ifdef CONFIG_PCI_QUIRKS
-	int pos;
+	int pos, ret;
 	u32 header, tmp;

 	pci_read_config_dword(dev, PCI_VENDOR_ID, &header);

 	for (pos = PCI_CFG_SPACE_SIZE;
 	     pos < PCI_CFG_SPACE_EXP_SIZE; pos += PCI_CFG_SPACE_SIZE) {
-		if (pci_read_config_dword(dev, pos, &tmp) != PCIBIOS_SUCCESSFUL
-		    || header != tmp)
+		ret = pci_read_config_dword(dev, pos, &tmp);
+		if ((ret != PCIBIOS_SUCCESSFUL) || (header != tmp))
 			return false;
 	}
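
The pci_ext_cfg_is_aliased() change follows the series-wide pattern of capturing the PCIBIOS return code in a local variable instead of burying the accessor call inside the condition. The shape of the pattern as a hypothetical helper (cfg_dword_matches() is illustrative, not kernel code; pci_read_config_dword() and PCIBIOS_SUCCESSFUL are the real interfaces):

    static bool cfg_dword_matches(struct pci_dev *dev, int pos, u32 expected)
    {
            u32 tmp;
            int ret;

            ret = pci_read_config_dword(dev, pos, &tmp);
            if (ret != PCIBIOS_SUCCESSFUL)
                    return false; /* the config access itself failed */

            return tmp == expected;
    }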


@@ -690,7 +690,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS100, quirk_ati_
 /*
  * In the AMD NL platform, this device ([1022:7912]) has a class code of
  * PCI_CLASS_SERIAL_USB_XHCI (0x0c0330), which means the xhci driver will
- * claim it.
+ * claim it. The same applies on the VanGogh platform device ([1022:163a]).
  *
  * But the dwc3 driver is a more specific driver for this device, and we'd
  * prefer to use it instead of xhci. To prevent xhci from claiming the
@@ -698,7 +698,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS100, quirk_ati_
  * defines as "USB device (not host controller)". The dwc3 driver can then
  * claim it based on its Vendor and Device ID.
  */
-static void quirk_amd_nl_class(struct pci_dev *pdev)
+static void quirk_amd_dwc_class(struct pci_dev *pdev)
 {
 	u32 class = pdev->class;
@@ -708,7 +708,9 @@ static void quirk_amd_nl_class(struct pci_dev *pdev)
 		 class, pdev->class);
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB,
-		quirk_amd_nl_class);
+		quirk_amd_dwc_class);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VANGOGH_USB,
+		quirk_amd_dwc_class);

 /*
  * Synopsys USB 3.x host HAPS platform has a class code of
@@ -1844,8 +1846,8 @@ static void quirk_jmicron_ata(struct pci_dev *pdev)

 	/* Update pdev accordingly */
 	pci_read_config_byte(pdev, PCI_HEADER_TYPE, &hdr);
-	pdev->hdr_type = hdr & 0x7f;
-	pdev->multifunction = !!(hdr & 0x80);
+	pdev->hdr_type = hdr & PCI_HEADER_TYPE_MASK;
+	pdev->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr);

 	pci_read_config_dword(pdev, PCI_CLASS_REVISION, &class);
 	pdev->class = class >> 8;
@@ -4553,9 +4555,9 @@ static void quirk_disable_root_port_attributes(struct pci_dev *pdev)
 	pci_info(root_port, "Disabling No Snoop/Relaxed Ordering Attributes to avoid PCIe Completion erratum in %s\n",
 		 dev_name(&pdev->dev));

-	pcie_capability_clear_and_set_word(root_port, PCI_EXP_DEVCTL,
-					   PCI_EXP_DEVCTL_RELAX_EN |
-					   PCI_EXP_DEVCTL_NOSNOOP_EN, 0);
+	pcie_capability_clear_word(root_port, PCI_EXP_DEVCTL,
+				   PCI_EXP_DEVCTL_RELAX_EN |
+				   PCI_EXP_DEVCTL_NOSNOOP_EN);
 }

 /*
@@ -5383,7 +5385,7 @@ int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
  */
 static void quirk_intel_qat_vf_cap(struct pci_dev *pdev)
 {
-	int pos, i = 0;
+	int pos, i = 0, ret;
 	u8 next_cap;
 	u16 reg16, *cap;
 	struct pci_cap_saved_state *state;
@@ -5429,8 +5431,8 @@ static void quirk_intel_qat_vf_cap(struct pci_dev *pdev)
 	pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD;

 	pdev->cfg_size = PCI_CFG_SPACE_EXP_SIZE;
-	if (pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &status) !=
-	    PCIBIOS_SUCCESSFUL || (status == 0xffffffff))
+	ret = pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &status);
+	if ((ret != PCIBIOS_SUCCESSFUL) || (PCI_POSSIBLE_ERROR(status)))
 		pdev->cfg_size = PCI_CFG_SPACE_SIZE;

 	if (pci_find_saved_cap(pdev, PCI_CAP_ID_EXP))
@@ -5507,6 +5509,12 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0420, quirk_no_ext_tags);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);

 #ifdef CONFIG_PCI_ATS
+static void quirk_no_ats(struct pci_dev *pdev)
+{
+	pci_info(pdev, "disabling ATS\n");
+	pdev->ats_cap = 0;
+}
+
 /*
  * Some devices require additional driver setup to enable ATS.  Don't use
  * ATS for those devices as ATS will be enabled before the driver has had a
@@ -5520,14 +5528,10 @@ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
 		    (pdev->subsystem_device == 0xce19 ||
 		     pdev->subsystem_device == 0xcc10 ||
 		     pdev->subsystem_device == 0xcc08))
-			goto no_ats;
-		else
-			return;
+			quirk_no_ats(pdev);
+	} else {
+		quirk_no_ats(pdev);
 	}
-
-no_ats:
-	pci_info(pdev, "disabling ATS\n");
-	pdev->ats_cap = 0;
 }

 /* AMD Stoney platform GPU */
@@ -5550,6 +5554,25 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7347, quirk_amd_harvest_no_ats);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x734f, quirk_amd_harvest_no_ats);
 /* AMD Raven platform iGPU */
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x15d8, quirk_amd_harvest_no_ats);
+
+/*
+ * Intel IPU E2000 revisions before C0 implement incorrect endianness
+ * in ATS Invalidate Request message body. Disable ATS for those devices.
+ */
+static void quirk_intel_e2000_no_ats(struct pci_dev *pdev)
+{
+	if (pdev->revision < 0x20)
+		quirk_no_ats(pdev);
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1451, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1452, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1453, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1454, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1455, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1457, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1459, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x145a, quirk_intel_e2000_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x145c, quirk_intel_e2000_no_ats);
 #endif /* CONFIG_PCI_ATS */

 /* Freescale PCIe doesn't support MSI in RC mode */
@@ -5666,7 +5689,7 @@ static void quirk_nvidia_hda(struct pci_dev *gpu)

 	/* The GPU becomes a multi-function device when the HDA is enabled */
 	pci_read_config_byte(gpu, PCI_HEADER_TYPE, &hdr_type);
-	gpu->multifunction = !!(hdr_type & 0x80);
+	gpu->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr_type);
 }
 DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
 			       PCI_BASE_CLASS_DISPLAY, 16, quirk_nvidia_hda);
@@ -6154,7 +6177,7 @@ static void dpc_log_size(struct pci_dev *dev)
 	if (!(val & PCI_EXP_DPC_CAP_RP_EXT))
 		return;

-	if (!((val & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8)) {
+	if (FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, val) == 0) {
 		pci_info(dev, "Overriding RP PIO Log Size to 4\n");
 		dev->dpc_rp_log_size = 4;
 	}
@@ -6188,3 +6211,15 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020, of_pci_make_dev_node);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node);
+
+/*
+ * Devices known to require a longer delay before first config space access
+ * after reset recovery or resume from D3cold:
+ *
+ * VideoPropulsion (aka Genroco) Torrent QN16e MPEG QAM Modulator
+ */
+static void pci_fixup_d3cold_delay_1sec(struct pci_dev *pdev)
+{
+	pdev->d3cold_delay = 1000;
+}
+DECLARE_PCI_FIXUP_FINAL(0x5555, 0x0004, pci_fixup_d3cold_delay_1sec);
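
For readers new to this file: each DECLARE_PCI_FIXUP_* statement registers a callback that the PCI core runs for matching Vendor/Device IDs at a particular enumeration phase (HEADER fixups run while the header is parsed, FINAL fixups after the device is fully set up). A hypothetical fixup in the same shape as the additions above, with invented IDs:

    /* Hypothetical example only: apply a 1 s resume delay to an imaginary
     * device 1234:5678, using the same knob as the QN16e fixup above. */
    static void quirk_example_d3cold_delay(struct pci_dev *pdev)
    {
            pci_info(pdev, "applying example d3cold delay quirk\n");
            pdev->d3cold_delay = 1000; /* milliseconds */
    }
    DECLARE_PCI_FIXUP_FINAL(0x1234, 0x5678, quirk_example_d3cold_delay);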


@@ -363,6 +363,37 @@ struct pci_dev *pci_get_class(unsigned int class, struct pci_dev *from)
 }
 EXPORT_SYMBOL(pci_get_class);

+/**
+ * pci_get_base_class - search for a PCI device by matching against the base class code only
+ * @class: search for a PCI device with this base class code
+ * @from: Previous PCI device found in search, or %NULL for new search.
+ *
+ * Iterates through the list of known PCI devices. If a PCI device is found
+ * with a matching base class code, the reference count to the device is
+ * incremented. See pci_match_one_device() to figure out how this works.
+ * A new search is initiated by passing %NULL as the @from argument.
+ * Otherwise if @from is not %NULL, searches continue from the next device
+ * on the global list. The reference count for @from is always decremented
+ * if it is not %NULL.
+ *
+ * Returns:
+ * A pointer to a matched PCI device, %NULL otherwise.
+ */
+struct pci_dev *pci_get_base_class(unsigned int class, struct pci_dev *from)
+{
+	struct pci_device_id id = {
+		.vendor = PCI_ANY_ID,
+		.device = PCI_ANY_ID,
+		.subvendor = PCI_ANY_ID,
+		.subdevice = PCI_ANY_ID,
+		.class_mask = 0xFF0000,
+		.class = class << 16,
+	};
+
+	return pci_get_dev_by_id(&id, from);
+}
+EXPORT_SYMBOL(pci_get_base_class);
+
 /**
  * pci_dev_present - Returns 1 if device matching the device list is present, 0 if not.
  * @ids: A pointer to a null terminated list of struct pci_device_id structures
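
The new accessor keeps pci_get_class()'s reference-counting contract: every returned device is pinned, and passing it back as @from drops that reference, so a loop that runs to completion needs no explicit pci_dev_put(). A hedged usage sketch (count_display_devices() is hypothetical):

    /* Sketch: count display-class devices regardless of subclass
     * (VGA, XGA, 3D, "other"). Exiting the loop early would require
     * a pci_dev_put(pdev) before the break. */
    static unsigned int count_display_devices(void)
    {
            struct pci_dev *pdev = NULL;
            unsigned int n = 0;

            while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev)))
                    n++;

            return n;
    }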


@@ -2129,7 +2129,7 @@ dump:
 	pci_bus_dump_resources(bus);
 }

-void __init pci_assign_unassigned_resources(void)
+void pci_assign_unassigned_resources(void)
 {
 	struct pci_bus *root_bus;


@@ -6,6 +6,7 @@
  * Author: Alex Williamson <alex.williamson@redhat.com>
  */

+#include <linux/bitfield.h>
 #include <linux/device.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -201,9 +202,9 @@ static int pci_vc_do_save_buffer(struct pci_dev *dev, int pos,
 	/* Extended VC Count (not counting VC0) */
 	evcc = cap1 & PCI_VC_CAP1_EVCC;
 	/* Low Priority Extended VC Count (not counting VC0) */
-	lpevcc = (cap1 & PCI_VC_CAP1_LPEVCC) >> 4;
+	lpevcc = FIELD_GET(PCI_VC_CAP1_LPEVCC, cap1);
 	/* Port Arbitration Table Entry Size (bits) */
-	parb_size = 1 << ((cap1 & PCI_VC_CAP1_ARB_SIZE) >> 10);
+	parb_size = 1 << FIELD_GET(PCI_VC_CAP1_ARB_SIZE, cap1);

 	/*
 	 * Port VC Control Register contains VC Arbitration Select, which
@@ -231,7 +232,7 @@ static int pci_vc_do_save_buffer(struct pci_dev *dev, int pos,
 		int vcarb_offset;

 		pci_read_config_dword(dev, pos + PCI_VC_PORT_CAP2, &cap2);
-		vcarb_offset = ((cap2 & PCI_VC_CAP2_ARB_OFF) >> 24) * 16;
+		vcarb_offset = FIELD_GET(PCI_VC_CAP2_ARB_OFF, cap2) * 16;

 		if (vcarb_offset) {
 			int size, vcarb_phases = 0;
@@ -277,7 +278,7 @@ static int pci_vc_do_save_buffer(struct pci_dev *dev, int pos,
 		pci_read_config_dword(dev, pos + PCI_VC_RES_CAP +
 				      (i * PCI_CAP_VC_PER_VC_SIZEOF), &cap);
-		parb_offset = ((cap & PCI_VC_RES_CAP_ARB_OFF) >> 24) * 16;
+		parb_offset = FIELD_GET(PCI_VC_RES_CAP_ARB_OFF, cap) * 16;
 		if (parb_offset) {
 			int size, parb_phases = 0;


@@ -764,10 +764,6 @@ static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)
 	struct pci_dev *bridge;
 	u16 cmd;

-	/* Only deal with VGA class devices */
-	if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
-		return false;
-
 	/* Allocate structure */
 	vgadev = kzalloc(sizeof(struct vga_device), GFP_KERNEL);
 	if (vgadev == NULL) {
@@ -1503,6 +1499,10 @@ static int pci_notify(struct notifier_block *nb, unsigned long action,

 	vgaarb_dbg(dev, "%s\n", __func__);

+	/* Only deal with VGA class devices */
+	if (!pci_is_vga(pdev))
+		return 0;
+
 	/*
 	 * For now, we're only interested in devices added and removed.
 	 * I didn't test this thing here, so someone needs to double check
@@ -1550,8 +1550,10 @@ static int __init vga_arb_device_init(void)
 	pdev = NULL;
 	while ((pdev =
 		pci_get_subsys(PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
-			       PCI_ANY_ID, pdev)) != NULL)
-		vga_arbiter_add_pci_device(pdev);
+			       PCI_ANY_ID, pdev)) != NULL) {
+		if (pci_is_vga(pdev))
+			vga_arbiter_add_pci_device(pdev);
+	}

 	pr_info("loaded\n");
 	return rc;


@@ -761,12 +761,14 @@ static void ipr_mask_and_clear_interrupts(struct ipr_ioa_cfg *ioa_cfg,
 static int ipr_save_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
 {
 	int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+	int rc;

 	if (pcix_cmd_reg == 0)
 		return 0;

-	if (pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
-				 &ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+	rc = pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
+				  &ioa_cfg->saved_pcix_cmd_reg);
+	if (rc != PCIBIOS_SUCCESSFUL) {
 		dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
 		return -EIO;
 	}
@@ -785,10 +787,12 @@ static int ipr_save_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
 static int ipr_set_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
 {
 	int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+	int rc;

 	if (pcix_cmd_reg) {
-		if (pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
-					  ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+		rc = pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
+					   ioa_cfg->saved_pcix_cmd_reg);
+		if (rc != PCIBIOS_SUCCESSFUL) {
 			dev_err(&ioa_cfg->pdev->dev, "Failed to setup PCI-X command register\n");
 			return -EIO;
 		}


@@ -39,9 +39,6 @@ struct logic_pio_host_ops {
 #ifdef CONFIG_INDIRECT_PIO
 u8 logic_inb(unsigned long addr);
-void logic_outb(u8 value, unsigned long addr);
-void logic_outw(u16 value, unsigned long addr);
-void logic_outl(u32 value, unsigned long addr);
 u16 logic_inw(unsigned long addr);
 u32 logic_inl(unsigned long addr);
 void logic_outb(u8 value, unsigned long addr);


@@ -713,6 +713,30 @@ static inline bool pci_is_bridge(struct pci_dev *dev)
 	       dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
 }

+/**
+ * pci_is_vga - check if the PCI device is a VGA device
+ * @pdev: the PCI device
+ *
+ * The PCI Code and ID Assignment spec, r1.15, secs 1.4 and 1.1, define
+ * VGA Base Class and Sub-Classes:
+ *
+ *   03 00  PCI_CLASS_DISPLAY_VGA      VGA-compatible or 8514-compatible
+ *   00 01  PCI_CLASS_NOT_DEFINED_VGA  VGA-compatible (before Class Code)
+ *
+ * Return true if the PCI device is a VGA device and uses the legacy VGA
+ * resources ([mem 0xa0000-0xbffff], [io 0x3b0-0x3bb], [io 0x3c0-0x3df] and
+ * aliases).
+ */
+static inline bool pci_is_vga(struct pci_dev *pdev)
+{
+	if ((pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA)
+		return true;
+
+	if ((pdev->class >> 8) == PCI_CLASS_NOT_DEFINED_VGA)
+		return true;
+
+	return false;
+}
+
 #define for_each_pci_bridge(dev, bus)				\
 	list_for_each_entry(dev, &bus->devices, bus_list)	\
 		if (!pci_is_bridge(dev)) {} else
@@ -1181,6 +1205,8 @@ struct pci_dev *pci_get_slot(struct pci_bus *bus, unsigned int devfn);
 struct pci_dev *pci_get_domain_bus_and_slot(int domain, unsigned int bus,
 					    unsigned int devfn);
 struct pci_dev *pci_get_class(unsigned int class, struct pci_dev *from);
+struct pci_dev *pci_get_base_class(unsigned int class, struct pci_dev *from);
+
 int pci_dev_present(const struct pci_device_id *ids);

 int pci_bus_read_config_byte(struct pci_bus *bus, unsigned int devfn,
@@ -1950,6 +1976,9 @@ static inline struct pci_dev *pci_get_class(unsigned int class,
 					    struct pci_dev *from)
 { return NULL; }

+static inline struct pci_dev *pci_get_base_class(unsigned int class,
+						 struct pci_dev *from)
+{ return NULL; }
+
 static inline int pci_dev_present(const struct pci_device_id *ids)
 { return 0; }
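
In open-coded terms, pci_is_vga() differs from the widespread (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA test only in also accepting the pre-Class-Code encoding; a restatement for clarity (is_vga_open_coded() is hypothetical):

    static bool is_vga_open_coded(struct pci_dev *pdev)
    {
            /* pdev->class keeps the programming interface in its low
             * byte, hence the >> 8 before comparing class codes. */
            u16 class = pdev->class >> 8;

            return class == PCI_CLASS_DISPLAY_VGA ||      /* 0x0300 */
                   class == PCI_CLASS_NOT_DEFINED_VGA;    /* 0x0001 */
    }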


@@ -582,6 +582,7 @@
 #define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F3 0x16fb
 #define PCI_DEVICE_ID_AMD_MI200_DF_F3	0x14d3
 #define PCI_DEVICE_ID_AMD_MI300_DF_F3	0x152b
+#define PCI_DEVICE_ID_AMD_VANGOGH_USB	0x163a
 #define PCI_DEVICE_ID_AMD_CNB17H_F3	0x1703
 #define PCI_DEVICE_ID_AMD_LANCE		0x2000
 #define PCI_DEVICE_ID_AMD_LANCE_HOME	0x2001


@@ -80,6 +80,7 @@
 #define  PCI_HEADER_TYPE_NORMAL		0
 #define  PCI_HEADER_TYPE_BRIDGE		1
 #define  PCI_HEADER_TYPE_CARDBUS	2
+#define  PCI_HEADER_TYPE_MFD		0x80	/* Multi-Function Device (possible) */

 #define PCI_BIST		0x0f	/* 8 bits */
 #define  PCI_BIST_CODE_MASK	0x0f	/* Return result */
@@ -637,6 +638,7 @@
 #define PCI_EXP_RTCAP		0x1e	/* Root Capabilities */
 #define  PCI_EXP_RTCAP_CRSVIS	0x0001	/* CRS Software Visibility capability */
 #define PCI_EXP_RTSTA		0x20	/* Root Status */
+#define  PCI_EXP_RTSTA_PME_RQ_ID 0x0000ffff	/* PME Requester ID */
 #define  PCI_EXP_RTSTA_PME	0x00010000	/* PME status */
 #define  PCI_EXP_RTSTA_PENDING	0x00020000	/* PME pending */
 /*
@@ -930,12 +932,13 @@
 /* Process Address Space ID */
 #define PCI_PASID_CAP		0x04    /* PASID feature register */
-#define  PCI_PASID_CAP_EXEC	0x02	/* Exec permissions Supported */
-#define  PCI_PASID_CAP_PRIV	0x04	/* Privilege Mode Supported */
+#define  PCI_PASID_CAP_EXEC	0x0002	/* Exec permissions Supported */
+#define  PCI_PASID_CAP_PRIV	0x0004	/* Privilege Mode Supported */
+#define  PCI_PASID_CAP_WIDTH	0x1f00
 #define PCI_PASID_CTRL		0x06    /* PASID control register */
-#define  PCI_PASID_CTRL_ENABLE	0x01	/* Enable bit */
-#define  PCI_PASID_CTRL_EXEC	0x02	/* Exec permissions Enable */
-#define  PCI_PASID_CTRL_PRIV	0x04	/* Privilege Mode Enable */
+#define  PCI_PASID_CTRL_ENABLE	0x0001	/* Enable bit */
+#define  PCI_PASID_CTRL_EXEC	0x0002	/* Exec permissions Enable */
+#define  PCI_PASID_CTRL_PRIV	0x0004	/* Privilege Mode Enable */
 #define PCI_EXT_CAP_PASID_SIZEOF	8

 /* Single Root I/O Virtualization */
@@ -975,6 +978,8 @@
 #define  PCI_LTR_VALUE_MASK	0x000003ff
 #define  PCI_LTR_SCALE_MASK	0x00001c00
 #define  PCI_LTR_SCALE_SHIFT	10
+#define  PCI_LTR_NOSNOOP_VALUE	0x03ff0000	/* Max No-Snoop Latency Value */
+#define  PCI_LTR_NOSNOOP_SCALE	0x1c000000	/* Scale for Max Value */
 #define PCI_EXT_CAP_LTR_SIZEOF	8

 /* Access Control Service */
@@ -1042,9 +1047,16 @@
 #define PCI_EXP_DPC_STATUS		0x08	/* DPC Status */
 #define  PCI_EXP_DPC_STATUS_TRIGGER	    0x0001 /* Trigger Status */
 #define  PCI_EXP_DPC_STATUS_TRIGGER_RSN	    0x0006 /* Trigger Reason */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR	0x0000	/* Uncorrectable error */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE	0x0002	/* Rcvd ERR_NONFATAL */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE	0x0004	/* Rcvd ERR_FATAL */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT	0x0006	/* Reason in Trig Reason Extension field */
 #define  PCI_EXP_DPC_STATUS_INTERRUPT	    0x0008 /* Interrupt Status */
 #define  PCI_EXP_DPC_RP_BUSY		    0x0010 /* Root Port Busy */
 #define  PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT 0x0060 /* Trig Reason Extension */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO	    0x0000	/* RP PIO error */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER 0x0020	/* DPC SW Trigger bit */
+#define  PCI_EXP_DPC_RP_PIO_FEP		    0x1f00	/* RP PIO First Err Ptr */

 #define PCI_EXP_DPC_SOURCE_ID		 0x0A	/* DPC Source Identifier */
@@ -1088,6 +1100,8 @@
 #define  PCI_L1SS_CTL1_LTR_L12_TH_VALUE	0x03ff0000 /* LTR_L1.2_THRESHOLD_Value */
 #define  PCI_L1SS_CTL1_LTR_L12_TH_SCALE	0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */
 #define PCI_L1SS_CTL2			0x0c	/* Control 2 Register */
+#define  PCI_L1SS_CTL2_T_PWR_ON_SCALE	0x00000003 /* T_POWER_ON Scale */
+#define  PCI_L1SS_CTL2_T_PWR_ON_VALUE	0x000000f8 /* T_POWER_ON Value */

 /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
 #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
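
The new Trigger Reason sub-values make DPC status decoding table-driven instead of magic-number based. A hedged sketch of how a driver might classify the reason (dpc_trigger_reason() is hypothetical; dpc_status is assumed to hold the DPC Status register contents):

    static const char *dpc_trigger_reason(u16 dpc_status)
    {
            switch (dpc_status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) {
            case PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR:
                    return "unmasked uncorrectable error";
            case PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE:
                    return "ERR_NONFATAL received";
            case PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE:
                    return "ERR_FATAL received";
            case PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT:
            default: /* consult PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT */
                    return "see Trigger Reason Extension";
            }
    }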


@@ -1417,17 +1417,11 @@ static bool atpx_present(void)
 	acpi_handle dhandle, atpx_handle;
 	acpi_status status;

-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
-		dhandle = ACPI_HANDLE(&pdev->dev);
-		if (dhandle) {
-			status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);
-			if (ACPI_SUCCESS(status)) {
-				pci_dev_put(pdev);
-				return true;
-			}
-		}
-	}
-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
+	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
+		if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) &&
+		    (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8))
+			continue;
+
 		dhandle = ACPI_HANDLE(&pdev->dev);
 		if (dhandle) {
 			status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);