pci-v5.18-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmI7iOwUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vxkuhAAtJkVwfeyUjZ8sms+qWdZaucJmFF1
 PDeKy8O8upLzRRykdWoAOjKKVcCB9ohxBjPMco2oYNTmSozxeau8jjMA9OTQvTOS
 ZhDDoi49/vHRHuq3WIeAMCuk7tH3H1L3f0UHJxJ3H/oObQ+eMsitPcGFK+QrISDX
 pYokOnXZvf7BT7NpVtogSe2mhniOD1zQSicAMiH6WKNHHZcxewrzV9LP3MFOoBAr
 VMhlhzJbOp9spvCt7M1DycJEQ2RNe+wGLBFDalhPuprwnkNchRV+0AwWfD90zc9u
 h/0J8jkXfqS6QfSd/lOlTvI6kGsV8UKZEt4h4X/hlHFebFM5ktD9X7GmcoYUDFd9
 aHV3I/Jf62uGJ31IrT0V/cSYNlMO+IVFwXLGir4B1cFPOkzyIG/i60iV/C6bnnCa
 TCMH6vxalFycYaHBFqw/K/Dlq+mrAX74nQDfbk8y6rprczM1BN220Z8BkpG13TBu
 MxgCEul2/BJmNcPS1IWb/mCfBy+rdrVn2DZuID3J9KTwKNOUTIuAF0FuxLP4Bk4o
 sti3vKIXOcHnAcJB9tEnpEfstPv2JT13eWDIMmp/qCwqcujOvsg/DSYrx+8ogmBF
 DJ/sbPy3BdIOAeTgepWHAxYcv9SlZTGJGl+oaR1zV0qLBogyQUWZ9Ijx5aAEAw3j
 AJicpdk3BkH3LC8=
 =5Q9H
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:
   - Move the VGA arbiter from drivers/gpu to drivers/pci because it's
     PCI-specific, not GPU-specific (Bjorn Helgaas)
   - Select the default VGA device consistently whether it's enumerated
     before or after VGA arbiter init, which fixes arches that enumerate
     PCI devices late (Huacai Chen)

  Resource management:
   - Support BAR sizes up to 8TB (Dongdong Liu)

  PCIe native device hotplug:
   - Fix "Command Completed" tracking to avoid spurious timouts when
     powering off empty slots (Liguang Zhang)
   - Quirk Qualcomm devices that don't implement Command Completed
     correctly, again to avoid spurious timeouts (Manivannan Sadhasivam)

  Peer-to-peer DMA:
   - Add 3rd Gen Intel Xeon Scalable Processors to the whitelist
     (Michael J. Ruhl)

  APM X-Gene PCIe controller driver:
   - Revert generic DT parsing changes that broke some machines in the
     field (Marc Zyngier)

  Freescale i.MX6 PCIe controller driver:
   - Allow controller probe to succeed even when no devices are currently
     present, so devices can be hot-added later (Fabio Estevam)
   - Enable power management on i.MX6QP (Richard Zhu)
   - Assert CLKREQ# on i.MX8MM so enumeration doesn't hang when no
     device is connected (Richard Zhu)

  Marvell Aardvark PCIe controller driver:
   - Fix MSI and MSI-X support (Marek Behún, Pali Rohár)
   - Add support for ERR and PME interrupts (Pali Rohár)

  Marvell MVEBU PCIe controller driver:
   - Add DT binding and support for "num-lanes" (Pali Rohár)
   - Add support for INTx interrupts (Pali Rohár)

  Microsoft Hyper-V host bridge driver:
   - Avoid unnecessary hypercalls when unmasking IRQs on ARM64 (Boqun
     Feng)

  Qualcomm PCIe controller driver:
   - Add SM8450 DT binding and driver support (Dmitry Baryshkov)

  Renesas R-Car PCIe controller driver:
   - Help the controller get to the L1 state since the hardware can't do
     it on its own (Marek Vasut)
   - Return PCI_ERROR_RESPONSE (~0) for reads that fail on PCIe (Marek
     Vasut)

  SiFive FU740 PCIe controller driver:
   - Drop redundant '-gpios' from DT GPIO lookup (Ben Dooks)
   - Force 2.5GT/s for initial device probe (Ben Dooks)

  Socionext UniPhier Pro5 controller driver:
   - Add NX1 DT binding and driver support (Kunihiko Hayashi)

  Synopsys DesignWare PCIe controller driver:
   - Restore MSI configuration so MSI works after resume (Jisheng
     Zhang)"

* tag 'pci-v5.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (94 commits)
  x86/PCI: Add #includes to asm/pci_x86.h
  PCI: ibmphp: Remove unused assignments
  PCI: cpqphp: Remove unused assignments
  PCI: fu740: Remove unused assignments
  PCI: kirin: Remove unused assignments
  PCI: Remove unused assignments
  PCI: Declare pci_filp_private only when HAVE_PCI_MMAP
  PCI: Avoid broken MSI on SB600 USB devices
  PCI: fu740: Force 2.5GT/s for initial device probe
  PCI: xgene: Revert "PCI: xgene: Fix IB window setup"
  PCI: xgene: Revert "PCI: xgene: Use inbound resources for setup"
  PCI: imx6: Assert i.MX8MM CLKREQ# even if no device present
  PCI: imx6: Invoke the PHY exit function after PHY power off
  PCI: rcar: Use PCI_SET_ERROR_RESPONSE after read which triggered an exception
  PCI: rcar: Finish transition to L1 state in rcar_pcie_config_access()
  PCI: dwc: Restore MSI Receiver mask during resume
  PCI: fu740: Drop redundant '-gpios' from DT GPIO lookup
  PCI/VGA: Replace full MIT license text with SPDX identifier
  PCI/VGA: Use unsigned format string to print lock counts
  PCI/VGA: Log bridge control messages when adding devices
  ...
Linus Torvalds 2022-03-25 13:02:05 -07:00
commit 148a650476
64 changed files with 1591 additions and 764 deletions


@@ -77,9 +77,15 @@ and the following optional properties:
 - marvell,pcie-lane: the physical PCIe lane number, for ports having
   multiple lanes. If this property is not found, we assume that the
   value is 0.
+- num-lanes: number of SerDes PCIe lanes for this link (1 or 4)
 - reset-gpios: optional GPIO to PERST#
 - reset-delay-us: delay in us to wait after reset de-assertion, if not
   specified will default to 100ms, as required by the PCIe specification.
+- interrupt-names: list of interrupt names, supported are:
+    - "intx" - interrupt line triggered by one of the legacy interrupt
+- interrupts or interrupts-extended: List of the interrupt sources which
+  corresponding to the "interrupt-names". If non-empty then also additional
+  'interrupt-controller' subnode must be defined.

 Example:
@@ -141,6 +147,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 58>;
         marvell,pcie-port = <0>;
         marvell,pcie-lane = <0>;
+        num-lanes = <1>;
         /* low-active PERST# reset on GPIO 25 */
         reset-gpios = <&gpio0 25 1>;
         /* wait 20ms for device settle after reset deassertion */
@@ -161,6 +168,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 59>;
         marvell,pcie-port = <0>;
         marvell,pcie-lane = <1>;
+        num-lanes = <1>;
         clocks = <&gateclk 6>;
     };
@@ -177,6 +185,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 60>;
         marvell,pcie-port = <0>;
         marvell,pcie-lane = <2>;
+        num-lanes = <1>;
         clocks = <&gateclk 7>;
     };
@@ -193,6 +202,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 61>;
         marvell,pcie-port = <0>;
         marvell,pcie-lane = <3>;
+        num-lanes = <1>;
         clocks = <&gateclk 8>;
     };
@@ -209,6 +219,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 62>;
         marvell,pcie-port = <1>;
         marvell,pcie-lane = <0>;
+        num-lanes = <1>;
         clocks = <&gateclk 9>;
     };
@@ -225,6 +236,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 63>;
         marvell,pcie-port = <1>;
         marvell,pcie-lane = <1>;
+        num-lanes = <1>;
         clocks = <&gateclk 10>;
     };
@@ -241,6 +253,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 64>;
         marvell,pcie-port = <1>;
         marvell,pcie-lane = <2>;
+        num-lanes = <1>;
         clocks = <&gateclk 11>;
     };
@@ -257,6 +270,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 65>;
         marvell,pcie-port = <1>;
         marvell,pcie-lane = <3>;
+        num-lanes = <1>;
         clocks = <&gateclk 12>;
     };
@@ -273,6 +287,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 99>;
         marvell,pcie-port = <2>;
         marvell,pcie-lane = <0>;
+        num-lanes = <1>;
         clocks = <&gateclk 26>;
     };
@@ -289,6 +304,7 @@ pcie-controller {
         interrupt-map = <0 0 0 0 &mpic 103>;
         marvell,pcie-port = <3>;
         marvell,pcie-lane = <0>;
+        num-lanes = <1>;
         clocks = <&gateclk 27>;
     };
 };


@@ -15,6 +15,8 @@
         - "qcom,pcie-sc8180x" for sc8180x
         - "qcom,pcie-sdm845" for sdm845
         - "qcom,pcie-sm8250" for sm8250
+        - "qcom,pcie-sm8450-pcie0" for PCIe0 on sm8450
+        - "qcom,pcie-sm8450-pcie1" for PCIe1 on sm8450
         - "qcom,pcie-ipq6018" for ipq6018

 - reg:
@@ -169,6 +171,24 @@
         - "ddrss_sf_tbu"    PCIe SF TBU clock
         - "pipe"            PIPE clock

+- clock-names:
+    Usage: required for sm8450-pcie0 and sm8450-pcie1
+    Value type: <stringlist>
+    Definition: Should contain the following entries
+        - "aux"             Auxiliary clock
+        - "cfg"             Configuration clock
+        - "bus_master"      Master AXI clock
+        - "bus_slave"       Slave AXI clock
+        - "slave_q2a"       Slave Q2A clock
+        - "tbu"             PCIe TBU clock
+        - "ddrss_sf_tbu"    PCIe SF TBU clock
+        - "pipe"            PIPE clock
+        - "pipe_mux"        PIPE MUX
+        - "phy_pipe"        PIPE output clock
+        - "ref"             REFERENCE clock
+        - "aggre0"          Aggre NoC PCIe0 AXI clock, only for sm8450-pcie0
+        - "aggre1"          Aggre NoC PCIe1 AXI clock
+
 - resets:
     Usage: required
     Value type: <prop-encoded-array>
@@ -246,7 +266,7 @@
         - "ahb"             AHB reset

 - reset-names:
-    Usage: required for sc8180x, sdm845 and sm8250
+    Usage: required for sc8180x, sdm845, sm8250 and sm8450
     Value type: <stringlist>
     Definition: Should contain the following entries
         - "pci"             PCIe core reset


@@ -20,7 +20,9 @@ allOf:

 properties:
   compatible:
-    const: socionext,uniphier-pro5-pcie-ep
+    enum:
+      - socionext,uniphier-pro5-pcie-ep
+      - socionext,uniphier-nx1-pcie-ep

   reg:
     minItems: 4
@@ -41,20 +43,26 @@ properties:
       - const: atu

   clocks:
+    minItems: 1
     maxItems: 2

   clock-names:
-    items:
-      - const: gio
-      - const: link
+    oneOf:
+      - items:          # for Pro5
+          - const: gio
+          - const: link
+      - const: link     # for NX1

   resets:
+    minItems: 1
     maxItems: 2

   reset-names:
-    items:
-      - const: gio
-      - const: link
+    oneOf:
+      - items:          # for Pro5
+          - const: gio
+          - const: link
+      - const: link     # for NX1

   num-ib-windows:
     const: 16


@@ -100,7 +100,7 @@ In-kernel interface
 .. kernel-doc:: include/linux/vgaarb.h
    :internal:

-.. kernel-doc:: drivers/gpu/vga/vgaarb.c
+.. kernel-doc:: drivers/pci/vgaarb.c
    :export:

 libpciaccess


@@ -14938,6 +14938,7 @@ F: drivers/pci/controller/mobiveil/pcie-mobiveil*

 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
 M: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
+M: Pali Rohár <pali@kernel.org>
 L: linux-pci@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained


@@ -1380,8 +1380,6 @@
 #define PCIE_IDVAL3_REG			0x43c
 #define IDVAL3_CLASS_CODE_MASK		0xffffff
-#define IDVAL3_SUBCLASS_SHIFT		8
-#define IDVAL3_CLASS_SHIFT		16

 #define PCIE_DLSTATUS_REG		0x1048
 #define DLSTATUS_PHYLINKUP		(1 << 13)


@@ -75,7 +75,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SIBYTE, PCI_DEVICE_ID_BCM1250_PCI,
  */
 static void quirk_sb1250_ht(struct pci_dev *dev)
 {
-    dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+    dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIBYTE, PCI_DEVICE_ID_BCM1250_HT,
             quirk_sb1250_ht);
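
Note on the class-code hunks throughout this pull: many drivers and quirks, like the one above, used to open-code `PCI_CLASS_BRIDGE_PCI << 8` and now use the new `PCI_CLASS_BRIDGE_PCI_NORMAL` constant. A minimal standalone sketch of why the two spellings are the same value, with the constants mirroring include/linux/pci_ids.h (the assert harness is illustrative, not kernel code):

    #include <assert.h>

    /* 16-bit base/sub-class pair: base 0x06 (bridge), sub-class 0x04 (PCI-to-PCI) */
    #define PCI_CLASS_BRIDGE_PCI        0x0604
    /* full 24-bit class code: base, sub-class, plus a "normal decode" interface byte of 0x00 */
    #define PCI_CLASS_BRIDGE_PCI_NORMAL 0x060400

    int main(void)
    {
        /* shifting in a zero programming-interface byte yields the same 24-bit code */
        assert((PCI_CLASS_BRIDGE_PCI << 8) == PCI_CLASS_BRIDGE_PCI_NORMAL);
        return 0;
    }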


@@ -186,7 +186,7 @@ static int __init bcm63xx_register_pcie(void)
     /* setup class code as bridge */
     val = bcm_pcie_readl(PCIE_IDVAL3_REG);
     val &= ~IDVAL3_CLASS_CODE_MASK;
-    val |= (PCI_CLASS_BRIDGE_PCI << IDVAL3_SUBCLASS_SHIFT);
+    val |= PCI_CLASS_BRIDGE_PCI_NORMAL;
     bcm_pcie_writel(val, PCIE_IDVAL3_REG);

     /* disable bar1 size */


@@ -815,7 +815,7 @@ void pnv_pci_shutdown(void)

 /* Fixup wrong class code in p7ioc and p8 root complex */
 static void pnv_p7ioc_rc_quirk(struct pci_dev *dev)
 {
-    dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+    dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_IBM, 0x3b9, pnv_p7ioc_rc_quirk);


@@ -55,7 +55,7 @@ static void quirk_fsl_pcie_early(struct pci_dev *dev)
     if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE)
         return;

-    dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+    dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
     fsl_pcie_bus_fixup = 1;
     return;
 }


@@ -314,7 +314,7 @@ static int __init pcie_init(struct sh7786_pcie_port *port)
      * class to match. Hardware takes care of propagating the IDSETR
      * settings, so there is no need to bother with a quirk.
      */
-    pci_write_reg(chan, PCI_CLASS_BRIDGE_PCI << 16, SH4A_PCIEIDSETR1);
+    pci_write_reg(chan, PCI_CLASS_BRIDGE_PCI_NORMAL << 8, SH4A_PCIEIDSETR1);

     /* Initialize default capabilities. */
     data = pci_read_reg(chan, SH4A_PCIEEXPCAP0);


@@ -5,7 +5,10 @@
  * (c) 1999 Martin Mares <mj@ucw.cz>
  */

+#include <linux/errno.h>
+#include <linux/init.h>
 #include <linux/ioport.h>
+#include <linux/spinlock.h>

 #undef DEBUG


@@ -1,23 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
-config VGA_ARB
-    bool "VGA Arbitration" if EXPERT
-    default y
-    depends on (PCI && !S390)
-    help
-      Some "legacy" VGA devices implemented on PCI typically have the same
-      hard-decoded addresses as they did on ISA. When multiple PCI devices
-      are accessed at same time they need some kind of coordination. Please
-      see Documentation/gpu/vgaarbiter.rst for more details. Select this to
-      enable VGA arbiter.
-
-config VGA_ARB_MAX_GPUS
-    int "Maximum number of GPUs"
-    default 16
-    depends on VGA_ARB
-    help
-      Reserves space in the kernel to maintain resource locking for
-      multiple GPUS. The overhead for each GPU is very small.
-
 config VGA_SWITCHEROO
     bool "Laptop Hybrid Graphics - GPU switching support"
     depends on X86


@@ -1,3 +1,2 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_VGA_ARB) += vgaarb.o
 obj-$(CONFIG_VGA_SWITCHEROO) += vga_switcheroo.o


@@ -252,6 +252,25 @@ config PCIE_BUS_PEER2PEER

 endchoice

+config VGA_ARB
+    bool "VGA Arbitration" if EXPERT
+    default y
+    depends on (PCI && !S390)
+    help
+      Some "legacy" VGA devices implemented on PCI typically have the same
+      hard-decoded addresses as they did on ISA. When multiple PCI devices
+      are accessed at same time they need some kind of coordination. Please
+      see Documentation/gpu/vgaarbiter.rst for more details. Select this to
+      enable VGA arbiter.
+
+config VGA_ARB_MAX_GPUS
+    int "Maximum number of GPUs"
+    default 16
+    depends on VGA_ARB
+    help
+      Reserves space in the kernel to maintain resource locking for
+      multiple GPUS. The overhead for each GPU is very small.
+
 source "drivers/pci/hotplug/Kconfig"
 source "drivers/pci/controller/Kconfig"
 source "drivers/pci/endpoint/Kconfig"


@@ -30,6 +30,7 @@ obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o
 obj-$(CONFIG_PCI_ECAM)		+= ecam.o
 obj-$(CONFIG_PCI_P2PDMA)	+= p2pdma.o
 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
+obj-$(CONFIG_VGA_ARB)		+= vgaarb.o

 # Endpoint library must be initialized before its users
 obj-$(CONFIG_PCI_ENDPOINT)	+= endpoint/


@@ -159,9 +159,12 @@ int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
      * write happen to have any RW1C (write-one-to-clear) bits set, we
      * just inadvertently cleared something we shouldn't have.
      */
-    dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
-                 size, pci_domain_nr(bus), bus->number,
-                 PCI_SLOT(devfn), PCI_FUNC(devfn), where);
+    if (!bus->unsafe_warn) {
+        dev_warn(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
+             size, pci_domain_nr(bus), bus->number,
+             PCI_SLOT(devfn), PCI_FUNC(devfn), where);
+        bus->unsafe_warn = 1;
+    }

     mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
     tmp = readl(addr) & mask;


@@ -10,6 +10,10 @@ config PCI_MVEBU
     depends on ARM
     depends on OF
     select PCI_BRIDGE_EMUL
+    help
+      Add support for Marvell EBU PCIe controller. This PCIe controller
+      is used on 32-bit Marvell ARM SoCs: Dove, Kirkwood, Armada 370,
+      Armada XP, Armada 375, Armada 38x and Armada 39x.

 config PCI_AARDVARK
     tristate "Aardvark PCIe controller"


@@ -453,10 +453,6 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
     case IMX7D:
         break;
     case IMX8MM:
-        ret = clk_prepare_enable(imx6_pcie->pcie_aux);
-        if (ret)
-            dev_err(dev, "unable to enable pcie_aux clock\n");
-        break;
     case IMX8MQ:
         ret = clk_prepare_enable(imx6_pcie->pcie_aux);
         if (ret) {
@@ -809,9 +805,7 @@ static int imx6_pcie_start_link(struct dw_pcie *pci)
     /* Start LTSSM. */
     imx6_pcie_ltssm_enable(dev);

-    ret = dw_pcie_wait_for_link(pci);
-    if (ret)
-        goto err_reset_phy;
+    dw_pcie_wait_for_link(pci);

     if (pci->link_gen == 2) {
         /* Allow Gen2 mode after the link is up. */
@@ -847,11 +841,7 @@ static int imx6_pcie_start_link(struct dw_pcie *pci)
         }

         /* Make sure link training is finished as well! */
-        ret = dw_pcie_wait_for_link(pci);
-        if (ret) {
-            dev_err(dev, "Failed to bring link up!\n");
-            goto err_reset_phy;
-        }
+        dw_pcie_wait_for_link(pci);
     } else {
         dev_info(dev, "Link: Gen2 disabled\n");
     }
@@ -923,6 +913,7 @@ static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie)
     /* Others poke directly at IOMUXC registers */
     switch (imx6_pcie->drvdata->variant) {
     case IMX6SX:
+    case IMX6QP:
         regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
                    IMX6SX_GPR12_PCIE_PM_TURN_OFF,
                    IMX6SX_GPR12_PCIE_PM_TURN_OFF);
@@ -983,6 +974,7 @@ static int imx6_pcie_suspend_noirq(struct device *dev)
     case IMX8MM:
         if (phy_power_off(imx6_pcie->phy))
             dev_err(dev, "unable to power off PHY\n");
+        phy_exit(imx6_pcie->phy);
         break;
     default:
         break;
@@ -1252,7 +1244,8 @@ static const struct imx6_pcie_drvdata drvdata[] = {
     [IMX6QP] = {
         .variant = IMX6QP,
         .flags = IMX6_PCIE_FLAG_IMX6_PHY |
-             IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE,
+             IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE |
+             IMX6_PCIE_FLAG_SUPPORTS_SUSPEND,
         .dbi_length = 0x200,
     },
     [IMX7D] = {


@@ -531,13 +531,13 @@ static void ks_pcie_quirk(struct pci_dev *dev)
     struct pci_dev *bridge;
     static const struct pci_device_id rc_pci_devids[] = {
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
-         .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+         .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2E),
-         .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+         .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L),
-         .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+         .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G),
-         .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+         .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
         { 0, },
     };


@@ -313,14 +313,14 @@ static int meson_pcie_rd_own_conf(struct pci_bus *bus, u32 devfn,
      * cannot program the PCI_CLASS_DEVICE register, so we must fabricate
      * the return value in the config accessors.
      */
-    if (where == PCI_CLASS_REVISION && size == 4)
-        *val = (PCI_CLASS_BRIDGE_PCI << 16) | (*val & 0xffff);
-    else if (where == PCI_CLASS_DEVICE && size == 2)
-        *val = PCI_CLASS_BRIDGE_PCI;
-    else if (where == PCI_CLASS_DEVICE && size == 1)
-        *val = PCI_CLASS_BRIDGE_PCI & 0xff;
-    else if (where == PCI_CLASS_DEVICE + 1 && size == 1)
-        *val = (PCI_CLASS_BRIDGE_PCI >> 8) & 0xff;
+    if ((where & ~3) == PCI_CLASS_REVISION) {
+        if (size <= 2)
+            *val = (*val & ((1 << (size * 8)) - 1)) << (8 * (where & 3));
+        *val &= ~0xffffff00;
+        *val |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
+        if (size <= 2)
+            *val = (*val >> (8 * (where & 3))) & ((1 << (size * 8)) - 1);
+    }

     return PCIBIOS_SUCCESSFUL;
 }
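
The rewritten accessor above fabricates the bridge class code for reads of any width anywhere in the class/revision dword: a 1- or 2-byte read is first widened to its dword position, the top three bytes are spliced, and the requested bytes are extracted again. A user-space sketch of that widen-splice-extract logic (it assumes, as the driver does, that the class code occupies the three bytes above the revision ID; demo code, not the driver):

    #include <stdint.h>
    #include <stdio.h>

    #define PCI_CLASS_BRIDGE_PCI_NORMAL 0x060400

    static uint32_t fixup_class_read(uint32_t val, unsigned int where, unsigned int size)
    {
        if (size <= 2)      /* widen the value to its position in the dword */
            val = (val & ((1u << (size * 8)) - 1)) << (8 * (where & 3));
        val &= ~0xffffff00; /* splice the class code over bytes 1-3 */
        val |= (uint32_t)PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
        if (size <= 2)      /* extract the originally requested bytes */
            val = (val >> (8 * (where & 3))) & ((1u << (size * 8)) - 1);
        return val;
    }

    int main(void)
    {
        /* 1-byte read of the base-class byte (offset 0x0b) now returns 0x06 */
        printf("%#04x\n", fixup_class_read(0x00, 0x0b, 1));
        return 0;
    }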


@@ -362,6 +362,12 @@ int dw_pcie_host_init(struct pcie_port *pp)
         if (ret < 0)
             return ret;
     } else if (pp->has_msi_ctrl) {
+        u32 ctrl, num_ctrls;
+
+        num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+        for (ctrl = 0; ctrl < num_ctrls; ctrl++)
+            pp->irq_mask[ctrl] = ~0;
+
         if (!pp->msi_irq) {
             pp->msi_irq = platform_get_irq_byname_optional(pdev, "msi");
             if (pp->msi_irq < 0) {
@@ -541,7 +547,6 @@ void dw_pcie_setup_rc(struct pcie_port *pp)

     /* Initialize IRQ Status array */
     for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
-        pp->irq_mask[ctrl] = ~0;
         dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
                     (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
                     pp->irq_mask[ctrl]);


@@ -181,10 +181,59 @@ static int fu740_pcie_start_link(struct dw_pcie *pci)
 {
     struct device *dev = pci->dev;
     struct fu740_pcie *afp = dev_get_drvdata(dev);
+    u8 cap_exp = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+    int ret;
+    u32 orig, tmp;
+
+    /*
+     * Force 2.5GT/s when starting the link, due to some devices not
+     * probing at higher speeds. This happens with the PCIe switch
+     * on the Unmatched board when U-Boot has not initialised the PCIe.
+     * The fix in U-Boot is to force 2.5GT/s, which then gets cleared
+     * by the soft reset done by this driver.
+     */
+    dev_dbg(dev, "cap_exp at %x\n", cap_exp);
+    dw_pcie_dbi_ro_wr_en(pci);
+
+    tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
+    orig = tmp & PCI_EXP_LNKCAP_SLS;
+    tmp &= ~PCI_EXP_LNKCAP_SLS;
+    tmp |= PCI_EXP_LNKCAP_SLS_2_5GB;
+    dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);

     /* Enable LTSSM */
     writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_LTSSM_ENABLE);
-    return 0;
+
+    ret = dw_pcie_wait_for_link(pci);
+    if (ret) {
+        dev_err(dev, "error: link did not start\n");
+        goto err;
+    }
+
+    tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
+    if ((tmp & PCI_EXP_LNKCAP_SLS) != orig) {
+        dev_dbg(dev, "changing speed back to original\n");
+
+        tmp &= ~PCI_EXP_LNKCAP_SLS;
+        tmp |= orig;
+        dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
+
+        tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+        tmp |= PORT_LOGIC_SPEED_CHANGE;
+        dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
+
+        ret = dw_pcie_wait_for_link(pci);
+        if (ret) {
+            dev_err(dev, "error: link did not start at new speed\n");
+            goto err;
+        }
+    }
+    ret = 0;
+
+err:
+    WARN_ON(ret);   /* we assume that errors will be very rare */
+    dw_pcie_dbi_ro_wr_dis(pci);
+    return ret;
 }
@@ -224,7 +273,7 @@ static int fu740_pcie_host_init(struct pcie_port *pp)
     /* Clear hold_phy_rst */
     writel_relaxed(0x0, afp->mgmt_base + PCIEX8MGMT_APP_HOLD_PHY_RST);
     /* Enable pcieauxclk */
-    ret = clk_prepare_enable(afp->pcie_aux);
+    clk_prepare_enable(afp->pcie_aux);
     /* Set RC mode */
     writel_relaxed(0x4, afp->mgmt_base + PCIEX8MGMT_DEVICE_TYPE);
@@ -259,11 +308,11 @@ static int fu740_pcie_probe(struct platform_device *pdev)
         return PTR_ERR(afp->mgmt_base);

     /* Fetch GPIOs */
-    afp->reset = devm_gpiod_get_optional(dev, "reset-gpios", GPIOD_OUT_LOW);
+    afp->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
     if (IS_ERR(afp->reset))
         return dev_err_probe(dev, PTR_ERR(afp->reset), "unable to get reset-gpios\n");

-    afp->pwren = devm_gpiod_get_optional(dev, "pwren-gpios", GPIOD_OUT_LOW);
+    afp->pwren = devm_gpiod_get_optional(dev, "pwren", GPIOD_OUT_LOW);
     if (IS_ERR(afp->pwren))
         return dev_err_probe(dev, PTR_ERR(afp->pwren), "unable to get pwren-gpios\n");


@@ -332,9 +332,6 @@ static int hi3660_pcie_phy_init(struct platform_device *pdev,
     pcie->phy_priv = phy;
     phy->dev = dev;

-    /* registers */
-    pdev = container_of(dev, struct platform_device, dev);
-
     ret = hi3660_pcie_phy_get_clk(phy);
     if (ret)
         return ret;


@@ -161,7 +161,7 @@ struct qcom_pcie_resources_2_3_3 {

 /* 6 clocks typically, 7 for sm8250 */
 struct qcom_pcie_resources_2_7_0 {
-    struct clk_bulk_data clks[7];
+    struct clk_bulk_data clks[9];
     int num_clks;
     struct regulator_bulk_data supplies[2];
     struct reset_control *pci_reset;
@@ -195,6 +195,10 @@ struct qcom_pcie_ops {

 struct qcom_pcie_cfg {
     const struct qcom_pcie_ops *ops;
     unsigned int pipe_clk_need_muxing:1;
+    unsigned int has_tbu_clk:1;
+    unsigned int has_ddrss_sf_tbu_clk:1;
+    unsigned int has_aggre0_clk:1;
+    unsigned int has_aggre1_clk:1;
 };

 struct qcom_pcie {
@@ -204,8 +208,7 @@ struct qcom_pcie {
     union qcom_pcie_resources res;
     struct phy *phy;
     struct gpio_desc *reset;
-    const struct qcom_pcie_ops *ops;
-    unsigned int pipe_clk_need_muxing:1;
+    const struct qcom_pcie_cfg *cfg;
 };

 #define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
@@ -229,8 +232,8 @@ static int qcom_pcie_start_link(struct dw_pcie *pci)
     struct qcom_pcie *pcie = to_qcom_pcie(pci);

     /* Enable Link Training state machine */
-    if (pcie->ops->ltssm_enable)
-        pcie->ops->ltssm_enable(pcie);
+    if (pcie->cfg->ops->ltssm_enable)
+        pcie->cfg->ops->ltssm_enable(pcie);

     return 0;
 }
@@ -1146,6 +1149,7 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
     struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
+    unsigned int idx;
     int ret;

     res->pci_reset = devm_reset_control_get_exclusive(dev, "pci");
@@ -1159,24 +1163,28 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
     if (ret)
         return ret;

-    res->clks[0].id = "aux";
-    res->clks[1].id = "cfg";
-    res->clks[2].id = "bus_master";
-    res->clks[3].id = "bus_slave";
-    res->clks[4].id = "slave_q2a";
-    res->clks[5].id = "tbu";
-    if (of_device_is_compatible(dev->of_node, "qcom,pcie-sm8250")) {
-        res->clks[6].id = "ddrss_sf_tbu";
-        res->num_clks = 7;
-    } else {
-        res->num_clks = 6;
-    }
+    idx = 0;
+    res->clks[idx++].id = "aux";
+    res->clks[idx++].id = "cfg";
+    res->clks[idx++].id = "bus_master";
+    res->clks[idx++].id = "bus_slave";
+    res->clks[idx++].id = "slave_q2a";
+    if (pcie->cfg->has_tbu_clk)
+        res->clks[idx++].id = "tbu";
+    if (pcie->cfg->has_ddrss_sf_tbu_clk)
+        res->clks[idx++].id = "ddrss_sf_tbu";
+    if (pcie->cfg->has_aggre0_clk)
+        res->clks[idx++].id = "aggre0";
+    if (pcie->cfg->has_aggre1_clk)
+        res->clks[idx++].id = "aggre1";
+
+    res->num_clks = idx;

     ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);
     if (ret < 0)
         return ret;

-    if (pcie->pipe_clk_need_muxing) {
+    if (pcie->cfg->pipe_clk_need_muxing) {
         res->pipe_clk_src = devm_clk_get(dev, "pipe_mux");
         if (IS_ERR(res->pipe_clk_src))
             return PTR_ERR(res->pipe_clk_src);
@@ -1209,7 +1217,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
     }

     /* Set TCXO as clock source for pcie_pipe_clk_src */
-    if (pcie->pipe_clk_need_muxing)
+    if (pcie->cfg->pipe_clk_need_muxing)
         clk_set_parent(res->pipe_clk_src, res->ref_clk_src);

     ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
@@ -1236,6 +1244,9 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
         goto err_disable_clocks;
     }

+    /* Wait for reset to complete, required on SM8450 */
+    usleep_range(1000, 1500);
+
     /* configure PCIe to RC mode */
     writel(DEVICE_TYPE_RC, pcie->parf + PCIE20_PARF_DEVICE_TYPE);
@@ -1284,7 +1295,7 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
     struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;

     /* Set pipe clock as clock source for pcie_pipe_clk_src */
-    if (pcie->pipe_clk_need_muxing)
+    if (pcie->cfg->pipe_clk_need_muxing)
         clk_set_parent(res->pipe_clk_src, res->phy_pipe_clk);

     return clk_prepare_enable(res->pipe_clk);
@@ -1384,7 +1395,7 @@ static int qcom_pcie_host_init(struct pcie_port *pp)

     qcom_ep_reset_assert(pcie);

-    ret = pcie->ops->init(pcie);
+    ret = pcie->cfg->ops->init(pcie);
     if (ret)
         return ret;
@@ -1392,16 +1403,16 @@ static int qcom_pcie_host_init(struct pcie_port *pp)
     if (ret)
         goto err_deinit;

-    if (pcie->ops->post_init) {
-        ret = pcie->ops->post_init(pcie);
+    if (pcie->cfg->ops->post_init) {
+        ret = pcie->cfg->ops->post_init(pcie);
         if (ret)
             goto err_disable_phy;
     }

     qcom_ep_reset_deassert(pcie);

-    if (pcie->ops->config_sid) {
-        ret = pcie->ops->config_sid(pcie);
+    if (pcie->cfg->ops->config_sid) {
+        ret = pcie->cfg->ops->config_sid(pcie);
         if (ret)
             goto err;
     }
@@ -1410,12 +1421,12 @@ static int qcom_pcie_host_init(struct pcie_port *pp)

 err:
     qcom_ep_reset_assert(pcie);
-    if (pcie->ops->post_deinit)
-        pcie->ops->post_deinit(pcie);
+    if (pcie->cfg->ops->post_deinit)
+        pcie->cfg->ops->post_deinit(pcie);
 err_disable_phy:
     phy_power_off(pcie->phy);
 err_deinit:
-    pcie->ops->deinit(pcie);
+    pcie->cfg->ops->deinit(pcie);

     return ret;
 }
@@ -1509,14 +1520,33 @@ static const struct qcom_pcie_cfg ipq4019_cfg = {

 static const struct qcom_pcie_cfg sdm845_cfg = {
     .ops = &ops_2_7_0,
+    .has_tbu_clk = true,
 };

 static const struct qcom_pcie_cfg sm8250_cfg = {
     .ops = &ops_1_9_0,
+    .has_tbu_clk = true,
+    .has_ddrss_sf_tbu_clk = true,
+};
+
+static const struct qcom_pcie_cfg sm8450_pcie0_cfg = {
+    .ops = &ops_1_9_0,
+    .has_ddrss_sf_tbu_clk = true,
+    .pipe_clk_need_muxing = true,
+    .has_aggre0_clk = true,
+    .has_aggre1_clk = true,
+};
+
+static const struct qcom_pcie_cfg sm8450_pcie1_cfg = {
+    .ops = &ops_1_9_0,
+    .has_ddrss_sf_tbu_clk = true,
+    .pipe_clk_need_muxing = true,
+    .has_aggre1_clk = true,
 };

 static const struct qcom_pcie_cfg sc7280_cfg = {
     .ops = &ops_1_9_0,
+    .has_tbu_clk = true,
     .pipe_clk_need_muxing = true,
 };
@@ -1559,8 +1589,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
     pcie->pci = pci;

-    pcie->ops = pcie_cfg->ops;
-    pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing;
+    pcie->cfg = pcie_cfg;

     pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
     if (IS_ERR(pcie->reset)) {
@@ -1586,7 +1615,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
         goto err_pm_runtime_put;
     }

-    ret = pcie->ops->get_resources(pcie);
+    ret = pcie->cfg->ops->get_resources(pcie);
     if (ret)
         goto err_pm_runtime_put;
@@ -1628,13 +1657,15 @@ static const struct of_device_id qcom_pcie_match[] = {
     { .compatible = "qcom,pcie-sdm845", .data = &sdm845_cfg },
     { .compatible = "qcom,pcie-sm8250", .data = &sm8250_cfg },
     { .compatible = "qcom,pcie-sc8180x", .data = &sm8250_cfg },
+    { .compatible = "qcom,pcie-sm8450-pcie0", .data = &sm8450_pcie0_cfg },
+    { .compatible = "qcom,pcie-sm8450-pcie1", .data = &sm8450_pcie1_cfg },
     { .compatible = "qcom,pcie-sc7280", .data = &sc7280_cfg },
     { }
 };

 static void qcom_fixup_class(struct pci_dev *dev)
 {
-    dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+    dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class);


@@ -10,6 +10,7 @@
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/init.h>
+#include <linux/iopoll.h>
 #include <linux/of_device.h>
 #include <linux/pci.h>
 #include <linux/phy/phy.h>
@@ -31,6 +32,17 @@
 #define PCL_RSTCTRL2			0x0024
 #define PCL_RSTCTRL_PHY_RESET		BIT(0)

+#define PCL_PINCTRL0			0x002c
+#define PCL_PERST_PLDN_REGEN		BIT(12)
+#define PCL_PERST_NOE_REGEN		BIT(11)
+#define PCL_PERST_OUT_REGEN		BIT(8)
+#define PCL_PERST_PLDN_REGVAL		BIT(4)
+#define PCL_PERST_NOE_REGVAL		BIT(3)
+#define PCL_PERST_OUT_REGVAL		BIT(0)
+
+#define PCL_PIPEMON			0x0044
+#define PCL_PCLK_ALIVE			BIT(15)
+
 #define PCL_MODE			0x8000
 #define PCL_MODE_REGEN			BIT(8)
 #define PCL_MODE_REGVAL			BIT(0)
@@ -51,6 +63,9 @@
 #define PCL_APP_INTX			0x8074
 #define PCL_APP_INTX_SYS_INT		BIT(0)

+#define PCL_APP_PM0			0x8078
+#define PCL_SYS_AUX_PWR_DET		BIT(8)
+
 /* assertion time of INTx in usec */
 #define PCL_INTX_WIDTH_USEC		30
@@ -60,7 +75,14 @@ struct uniphier_pcie_ep_priv {
     struct clk *clk, *clk_gio;
     struct reset_control *rst, *rst_gio;
     struct phy *phy;
-    const struct pci_epc_features *features;
+    const struct uniphier_pcie_ep_soc_data *data;
+};
+
+struct uniphier_pcie_ep_soc_data {
+    bool has_gio;
+    void (*init)(struct uniphier_pcie_ep_priv *priv);
+    int (*wait)(struct uniphier_pcie_ep_priv *priv);
+    const struct pci_epc_features features;
 };

 #define to_uniphier_pcie(x) dev_get_drvdata((x)->dev)
@@ -91,7 +113,7 @@ static void uniphier_pcie_phy_reset(struct uniphier_pcie_ep_priv *priv,
     writel(val, priv->base + PCL_RSTCTRL2);
 }

-static void uniphier_pcie_init_ep(struct uniphier_pcie_ep_priv *priv)
+static void uniphier_pcie_pro5_init_ep(struct uniphier_pcie_ep_priv *priv)
 {
     u32 val;
@@ -116,6 +138,55 @@ static void uniphier_pcie_init_ep(struct uniphier_pcie_ep_priv *priv)
     msleep(100);
 }

+static void uniphier_pcie_nx1_init_ep(struct uniphier_pcie_ep_priv *priv)
+{
+    u32 val;
+
+    /* set EP mode */
+    val = readl(priv->base + PCL_MODE);
+    val |= PCL_MODE_REGEN | PCL_MODE_REGVAL;
+    writel(val, priv->base + PCL_MODE);
+
+    /* use auxiliary power detection */
+    val = readl(priv->base + PCL_APP_PM0);
+    val |= PCL_SYS_AUX_PWR_DET;
+    writel(val, priv->base + PCL_APP_PM0);
+
+    /* assert PERST# */
+    val = readl(priv->base + PCL_PINCTRL0);
+    val &= ~(PCL_PERST_NOE_REGVAL | PCL_PERST_OUT_REGVAL
+         | PCL_PERST_PLDN_REGVAL);
+    val |= PCL_PERST_NOE_REGEN | PCL_PERST_OUT_REGEN
+        | PCL_PERST_PLDN_REGEN;
+    writel(val, priv->base + PCL_PINCTRL0);
+
+    uniphier_pcie_ltssm_enable(priv, false);
+
+    usleep_range(100000, 200000);
+
+    /* deassert PERST# */
+    val = readl(priv->base + PCL_PINCTRL0);
+    val |= PCL_PERST_OUT_REGVAL | PCL_PERST_OUT_REGEN;
+    writel(val, priv->base + PCL_PINCTRL0);
+}
+
+static int uniphier_pcie_nx1_wait_ep(struct uniphier_pcie_ep_priv *priv)
+{
+    u32 status;
+    int ret;
+
+    /* wait PIPE clock */
+    ret = readl_poll_timeout(priv->base + PCL_PIPEMON, status,
+                 status & PCL_PCLK_ALIVE, 100000, 1000000);
+    if (ret) {
+        dev_err(priv->pci.dev,
+            "Failed to initialize controller in EP mode\n");
+        return ret;
+    }
+
+    return 0;
+}
+
 static int uniphier_pcie_start_link(struct dw_pcie *pci)
 {
     struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci);
@@ -209,7 +280,7 @@ uniphier_pcie_get_features(struct dw_pcie_ep *ep)
     struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
     struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci);

-    return priv->features;
+    return &priv->data->features;
 }

 static const struct dw_pcie_ep_ops uniphier_pcie_ep_ops = {
@@ -238,7 +309,8 @@ static int uniphier_pcie_ep_enable(struct uniphier_pcie_ep_priv *priv)
     if (ret)
         goto out_rst_assert;

-    uniphier_pcie_init_ep(priv);
+    if (priv->data->init)
+        priv->data->init(priv);

     uniphier_pcie_phy_reset(priv, true);
@@ -248,8 +320,16 @@ static int uniphier_pcie_ep_enable(struct uniphier_pcie_ep_priv *priv)

     uniphier_pcie_phy_reset(priv, false);

+    if (priv->data->wait) {
+        ret = priv->data->wait(priv);
+        if (ret)
+            goto out_phy_exit;
+    }
+
     return 0;

+out_phy_exit:
+    phy_exit(priv->phy);
 out_rst_gio_assert:
     reset_control_assert(priv->rst_gio);
 out_rst_assert:
@@ -277,8 +357,8 @@ static int uniphier_pcie_ep_probe(struct platform_device *pdev)
     if (!priv)
         return -ENOMEM;

-    priv->features = of_device_get_match_data(dev);
-    if (WARN_ON(!priv->features))
+    priv->data = of_device_get_match_data(dev);
+    if (WARN_ON(!priv->data))
         return -EINVAL;

     priv->pci.dev = dev;
@@ -288,13 +368,15 @@ static int uniphier_pcie_ep_probe(struct platform_device *pdev)
     if (IS_ERR(priv->base))
         return PTR_ERR(priv->base);

-    priv->clk_gio = devm_clk_get(dev, "gio");
-    if (IS_ERR(priv->clk_gio))
-        return PTR_ERR(priv->clk_gio);
+    if (priv->data->has_gio) {
+        priv->clk_gio = devm_clk_get(dev, "gio");
+        if (IS_ERR(priv->clk_gio))
+            return PTR_ERR(priv->clk_gio);

-    priv->rst_gio = devm_reset_control_get_shared(dev, "gio");
-    if (IS_ERR(priv->rst_gio))
-        return PTR_ERR(priv->rst_gio);
+        priv->rst_gio = devm_reset_control_get_shared(dev, "gio");
+        if (IS_ERR(priv->rst_gio))
+            return PTR_ERR(priv->rst_gio);
+    }

     priv->clk = devm_clk_get(dev, "link");
     if (IS_ERR(priv->clk))
@@ -321,13 +403,31 @@ static int uniphier_pcie_ep_probe(struct platform_device *pdev)
     return dw_pcie_ep_init(&priv->pci.ep);
 }

-static const struct pci_epc_features uniphier_pro5_data = {
-    .linkup_notifier = false,
-    .msi_capable = true,
-    .msix_capable = false,
-    .align = 1 << 16,
-    .bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
-    .reserved_bar = BIT(BAR_4),
+static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {
+    .has_gio = true,
+    .init = uniphier_pcie_pro5_init_ep,
+    .wait = NULL,
+    .features = {
+        .linkup_notifier = false,
+        .msi_capable = true,
+        .msix_capable = false,
+        .align = 1 << 16,
+        .bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
+        .reserved_bar = BIT(BAR_4),
+    },
+};
+
+static const struct uniphier_pcie_ep_soc_data uniphier_nx1_data = {
+    .has_gio = false,
+    .init = uniphier_pcie_nx1_init_ep,
+    .wait = uniphier_pcie_nx1_wait_ep,
+    .features = {
+        .linkup_notifier = false,
+        .msi_capable = true,
+        .msix_capable = false,
+        .align = 1 << 12,
+        .bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
+    },
 };

 static const struct of_device_id uniphier_pcie_ep_match[] = {
@@ -335,6 +435,10 @@ static const struct of_device_id uniphier_pcie_ep_match[] = {
     {
         .compatible = "socionext,uniphier-pro5-pcie-ep",
         .data = &uniphier_pro5_data,
     },
+    {
+        .compatible = "socionext,uniphier-nx1-pcie-ep",
+        .data = &uniphier_nx1_data,
+    },
     { /* sentinel */ },
 };


@@ -295,7 +295,7 @@ int mobiveil_host_init(struct mobiveil_pcie *pcie, bool reinit)
     /* fixup for PCIe class register */
     value = mobiveil_csr_readl(pcie, PAB_INTP_AXI_PIO_CLASS);
     value &= 0xff;
-    value |= (PCI_CLASS_BRIDGE_PCI << 16);
+    value |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
     mobiveil_csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS);

     return 0;


@ -38,10 +38,6 @@
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6)
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7) #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7)
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8) #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8)
#define PCIE_CORE_INT_A_ASSERT_ENABLE 1
#define PCIE_CORE_INT_B_ASSERT_ENABLE 2
#define PCIE_CORE_INT_C_ASSERT_ENABLE 3
#define PCIE_CORE_INT_D_ASSERT_ENABLE 4
/* PIO registers base address and register offsets */ /* PIO registers base address and register offsets */
#define PIO_BASE_ADDR 0x4000 #define PIO_BASE_ADDR 0x4000
#define PIO_CTRL (PIO_BASE_ADDR + 0x0) #define PIO_CTRL (PIO_BASE_ADDR + 0x0)
@ -102,6 +98,10 @@
#define PCIE_MSG_PM_PME_MASK BIT(7) #define PCIE_MSG_PM_PME_MASK BIT(7)
#define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44) #define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44)
#define PCIE_ISR0_MSI_INT_PENDING BIT(24) #define PCIE_ISR0_MSI_INT_PENDING BIT(24)
#define PCIE_ISR0_CORR_ERR BIT(11)
#define PCIE_ISR0_NFAT_ERR BIT(12)
#define PCIE_ISR0_FAT_ERR BIT(13)
#define PCIE_ISR0_ERR_MASK GENMASK(13, 11)
#define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val)) #define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val))
#define PCIE_ISR0_INTX_DEASSERT(val) BIT(20 + (val)) #define PCIE_ISR0_INTX_DEASSERT(val) BIT(20 + (val))
#define PCIE_ISR0_ALL_MASK GENMASK(31, 0) #define PCIE_ISR0_ALL_MASK GENMASK(31, 0)
@ -272,17 +272,16 @@ struct advk_pcie {
u32 actions; u32 actions;
} wins[OB_WIN_COUNT]; } wins[OB_WIN_COUNT];
u8 wins_count; u8 wins_count;
int irq;
struct irq_domain *rp_irq_domain;
struct irq_domain *irq_domain; struct irq_domain *irq_domain;
struct irq_chip irq_chip; struct irq_chip irq_chip;
raw_spinlock_t irq_lock; raw_spinlock_t irq_lock;
struct irq_domain *msi_domain; struct irq_domain *msi_domain;
struct irq_domain *msi_inner_domain; struct irq_domain *msi_inner_domain;
struct irq_chip msi_bottom_irq_chip; raw_spinlock_t msi_irq_lock;
struct irq_chip msi_irq_chip;
struct msi_domain_info msi_domain_info;
DECLARE_BITMAP(msi_used, MSI_IRQ_NUM); DECLARE_BITMAP(msi_used, MSI_IRQ_NUM);
struct mutex msi_used_lock; struct mutex msi_used_lock;
u16 msi_msg;
int link_gen; int link_gen;
struct pci_bridge_emul bridge; struct pci_bridge_emul bridge;
struct gpio_desc *reset_gpio; struct gpio_desc *reset_gpio;
@ -477,6 +476,7 @@ static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
static void advk_pcie_setup_hw(struct advk_pcie *pcie) static void advk_pcie_setup_hw(struct advk_pcie *pcie)
{ {
phys_addr_t msi_addr;
u32 reg; u32 reg;
int i; int i;
@ -529,7 +529,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
*/ */
reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG); reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG);
reg &= ~0xffffff00; reg &= ~0xffffff00;
reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8; reg |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG); advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG);
/* Disable Root Bridge I/O space, memory space and bus mastering */ /* Disable Root Bridge I/O space, memory space and bus mastering */
@ -565,6 +565,11 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
reg |= LANE_COUNT_1; reg |= LANE_COUNT_1;
advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
/* Set MSI address */
msi_addr = virt_to_phys(pcie);
advk_writel(pcie, lower_32_bits(msi_addr), PCIE_MSI_ADDR_LOW_REG);
advk_writel(pcie, upper_32_bits(msi_addr), PCIE_MSI_ADDR_HIGH_REG);
/* Enable MSI */ /* Enable MSI */
reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG); reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
reg |= PCIE_CORE_CTRL2_MSI_ENABLE; reg |= PCIE_CORE_CTRL2_MSI_ENABLE;
@ -576,15 +581,20 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG); advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG); advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
/* Disable All ISR0/1 Sources */ /* Disable All ISR0/1 and MSI Sources */
reg = PCIE_ISR0_ALL_MASK; advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_MASK_REG);
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
/* Unmask summary MSI interrupt */
reg = advk_readl(pcie, PCIE_ISR0_MASK_REG);
reg &= ~PCIE_ISR0_MSI_INT_PENDING; reg &= ~PCIE_ISR0_MSI_INT_PENDING;
advk_writel(pcie, reg, PCIE_ISR0_MASK_REG); advk_writel(pcie, reg, PCIE_ISR0_MASK_REG);
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG); /* Unmask PME interrupt for processing of PME requester */
reg = advk_readl(pcie, PCIE_ISR0_MASK_REG);
/* Unmask all MSIs */ reg &= ~PCIE_MSG_PM_PME_MASK;
advk_writel(pcie, ~(u32)PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG); advk_writel(pcie, reg, PCIE_ISR0_MASK_REG);
/* Enable summary interrupt for GIC SPI source */ /* Enable summary interrupt for GIC SPI source */
reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK); reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
@ -778,11 +788,15 @@ advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
case PCI_INTERRUPT_LINE: { case PCI_INTERRUPT_LINE: {
/* /*
* From the whole 32bit register we support reading from HW only * From the whole 32bit register we support reading from HW only
* one bit: PCI_BRIDGE_CTL_BUS_RESET. * two bits: PCI_BRIDGE_CTL_BUS_RESET and PCI_BRIDGE_CTL_SERR.
* Other bits are retrieved only from emulated config buffer. * Other bits are retrieved only from emulated config buffer.
*/ */
__le32 *cfgspace = (__le32 *)&bridge->conf; __le32 *cfgspace = (__le32 *)&bridge->conf;
u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]); u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
if (advk_readl(pcie, PCIE_ISR0_MASK_REG) & PCIE_ISR0_ERR_MASK)
val &= ~(PCI_BRIDGE_CTL_SERR << 16);
else
val |= PCI_BRIDGE_CTL_SERR << 16;
if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN) if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN)
val |= PCI_BRIDGE_CTL_BUS_RESET << 16; val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
else else
@ -808,6 +822,19 @@ advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
break; break;
case PCI_INTERRUPT_LINE: case PCI_INTERRUPT_LINE:
/*
* According to Figure 6-3: Pseudo Logic Diagram for Error
* Message Controls in PCIe base specification, SERR# Enable bit
* in Bridge Control register enable receiving of ERR_* messages
*/
if (mask & (PCI_BRIDGE_CTL_SERR << 16)) {
u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
if (new & (PCI_BRIDGE_CTL_SERR << 16))
val &= ~PCIE_ISR0_ERR_MASK;
else
val |= PCIE_ISR0_ERR_MASK;
advk_writel(pcie, val, PCIE_ISR0_MASK_REG);
}
if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) { if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG); u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16)) if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
@ -835,20 +862,11 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
*value = PCI_EXP_SLTSTA_PDS << 16; *value = PCI_EXP_SLTSTA_PDS << 16;
return PCI_BRIDGE_EMUL_HANDLED; return PCI_BRIDGE_EMUL_HANDLED;
case PCI_EXP_RTCTL: { /*
u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG); * PCI_EXP_RTCTL and PCI_EXP_RTSTA are also supported, but do not need
*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE; * to be handled here, because their values are stored in emulated
*value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE; * config space buffer, and we read them from there when needed.
-		 *value |= PCI_EXP_RTCAP_CRSVIS << 16; */
-		return PCI_BRIDGE_EMUL_HANDLED;
-	}
-
-	case PCI_EXP_RTSTA: {
-		u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
-		u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
-		*value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
-		return PCI_BRIDGE_EMUL_HANDLED;
-	}
-
 	case PCI_EXP_LNKCAP: {
 		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
@@ -903,19 +921,18 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
 		break;

 	case PCI_EXP_RTCTL: {
-		/* Only mask/unmask PME interrupt */
-		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG) &
-			~PCIE_MSG_PM_PME_MASK;
-		if ((new & PCI_EXP_RTCTL_PMEIE) == 0)
-			val |= PCIE_MSG_PM_PME_MASK;
-		advk_writel(pcie, val, PCIE_ISR0_MASK_REG);
+		u16 rootctl = le16_to_cpu(bridge->pcie_conf.rootctl);
+		/* Only emulation of PMEIE and CRSSVE bits is provided */
+		rootctl &= PCI_EXP_RTCTL_PMEIE | PCI_EXP_RTCTL_CRSSVE;
+		bridge->pcie_conf.rootctl = cpu_to_le16(rootctl);
 		break;
 	}

-	case PCI_EXP_RTSTA:
-		new = (new & PCI_EXP_RTSTA_PME) >> 9;
-		advk_writel(pcie, new, PCIE_ISR0_REG);
-		break;
+	/*
+	 * PCI_EXP_RTSTA is also supported, but does not need to be handled
+	 * here, because its value is stored in emulated config space buffer,
+	 * and we write it there when needed.
+	 */

 	case PCI_EXP_DEVCTL:
 	case PCI_EXP_DEVCTL2:
@@ -928,7 +945,7 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
 	}
 }

-static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+static const struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
 	.read_base = advk_pci_bridge_emul_base_conf_read,
 	.write_base = advk_pci_bridge_emul_base_conf_write,
 	.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
@@ -959,7 +976,7 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
 	bridge->conf.pref_mem_limit = cpu_to_le16(PCI_PREF_RANGE_TYPE_64);

 	/* Support interrupt A for MSI feature */
-	bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
+	bridge->conf.intpin = PCI_INTERRUPT_INTA;

 	/* Aardvark HW provides PCIe Capability structure in version 2 */
 	bridge->pcie_conf.cap = cpu_to_le16(2);
@@ -981,8 +998,12 @@ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
 		return false;

 	/*
-	 * If the link goes down after we check for link-up, nothing bad
-	 * happens but the config access times out.
+	 * If the link goes down after we check for link-up, we have a problem:
+	 * if a PIO request is executed while link-down, the whole controller
+	 * gets stuck in a non-functional state, and even after link comes up
+	 * again, PIO requests won't work anymore, and a reset of the whole PCIe
+	 * controller is needed. Therefore we need to prevent sending PIO
+	 * requests while the link is down.
 	 */
 	if (!pci_is_root_bus(bus) && !advk_pcie_link_up(pcie))
 		return false;
@@ -1180,11 +1201,11 @@ static void advk_msi_irq_compose_msi_msg(struct irq_data *data,
 					 struct msi_msg *msg)
 {
 	struct advk_pcie *pcie = irq_data_get_irq_chip_data(data);
-	phys_addr_t msi_msg = virt_to_phys(&pcie->msi_msg);
+	phys_addr_t msi_addr = virt_to_phys(pcie);

-	msg->address_lo = lower_32_bits(msi_msg);
-	msg->address_hi = upper_32_bits(msi_msg);
-	msg->data = data->irq;
+	msg->address_lo = lower_32_bits(msi_addr);
+	msg->address_hi = upper_32_bits(msi_addr);
+	msg->data = data->hwirq;
 }
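Two things changed in the message itself: the doorbell is now the physical address of the driver's own private structure (as I read the change, any physical address that no device will DMA to works, because the controller's matching window catches the write before it reaches DRAM), and the payload carries the domain hwirq rather than the Linux virq, which is what lets the ISR dispatch by status-bit index. A stand-alone C model of that contract (illustrative only; the doorbell constant stands in for virt_to_phys(pcie)):

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the MSI message programmed by advk_msi_irq_compose_msi_msg():
     * address = per-controller doorbell, data = hwirq index, so that
     * PCIE_MSI_STATUS_REG bit i maps straight to hwirq i in the ISR. */
    struct msi_msg_model {
    	uint32_t address_lo;
    	uint32_t address_hi;
    	uint32_t data;
    };

    int main(void)
    {
    	uint64_t doorbell = 0xdeadb000;	/* stands in for virt_to_phys(pcie) */
    	struct msi_msg_model msg = {
    		.address_lo = (uint32_t)(doorbell & 0xffffffff),
    		.address_hi = (uint32_t)(doorbell >> 32),
    		.data	    = 5,	/* hwirq, not the Linux virq */
    	};

    	printf("MSI: addr=%#x%08x data=%u\n",
    	       msg.address_hi, msg.address_lo, msg.data);
    	return 0;
    }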
 static int advk_msi_set_affinity(struct irq_data *irq_data,
@@ -1193,6 +1214,54 @@ static int advk_msi_set_affinity(struct irq_data *irq_data,
 	return -EINVAL;
 }

+static void advk_msi_irq_mask(struct irq_data *d)
+{
+	struct advk_pcie *pcie = d->domain->host_data;
+	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
+	u32 mask;
+
+	raw_spin_lock_irqsave(&pcie->msi_irq_lock, flags);
+	mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+	mask |= BIT(hwirq);
+	advk_writel(pcie, mask, PCIE_MSI_MASK_REG);
+	raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
+}
+
+static void advk_msi_irq_unmask(struct irq_data *d)
+{
+	struct advk_pcie *pcie = d->domain->host_data;
+	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
+	u32 mask;
+
+	raw_spin_lock_irqsave(&pcie->msi_irq_lock, flags);
+	mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+	mask &= ~BIT(hwirq);
+	advk_writel(pcie, mask, PCIE_MSI_MASK_REG);
+	raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
+}
+
+static void advk_msi_top_irq_mask(struct irq_data *d)
+{
+	pci_msi_mask_irq(d);
+	irq_chip_mask_parent(d);
+}
+
+static void advk_msi_top_irq_unmask(struct irq_data *d)
+{
+	pci_msi_unmask_irq(d);
+	irq_chip_unmask_parent(d);
+}
+
+static struct irq_chip advk_msi_bottom_irq_chip = {
+	.name			= "MSI",
+	.irq_compose_msi_msg	= advk_msi_irq_compose_msi_msg,
+	.irq_set_affinity	= advk_msi_set_affinity,
+	.irq_mask		= advk_msi_irq_mask,
+	.irq_unmask		= advk_msi_irq_unmask,
+};
+
 static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
 				     unsigned int virq,
 				     unsigned int nr_irqs, void *args)
@@ -1201,19 +1270,15 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
 	int hwirq, i;

 	mutex_lock(&pcie->msi_used_lock);
-	hwirq = bitmap_find_next_zero_area(pcie->msi_used, MSI_IRQ_NUM,
-					   0, nr_irqs, 0);
-	if (hwirq >= MSI_IRQ_NUM) {
-		mutex_unlock(&pcie->msi_used_lock);
-		return -ENOSPC;
-	}
-	bitmap_set(pcie->msi_used, hwirq, nr_irqs);
+	hwirq = bitmap_find_free_region(pcie->msi_used, MSI_IRQ_NUM,
+					order_base_2(nr_irqs));
 	mutex_unlock(&pcie->msi_used_lock);

+	if (hwirq < 0)
+		return -ENOSPC;
+
 	for (i = 0; i < nr_irqs; i++)
 		irq_domain_set_info(domain, virq + i, hwirq + i,
-				    &pcie->msi_bottom_irq_chip,
+				    &advk_msi_bottom_irq_chip,
 				    domain->host_data, handle_simple_irq,
 				    NULL, NULL);
@@ -1227,7 +1292,7 @@ static void advk_msi_irq_domain_free(struct irq_domain *domain,
 	struct advk_pcie *pcie = domain->host_data;

 	mutex_lock(&pcie->msi_used_lock);
-	bitmap_clear(pcie->msi_used, d->hwirq, nr_irqs);
+	bitmap_release_region(pcie->msi_used, d->hwirq, order_base_2(nr_irqs));
 	mutex_unlock(&pcie->msi_used_lock);
 }
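The allocator switch is not cosmetic: PCI multi-MSI hands out blocks whose size is a power of two and whose first vector is aligned to that size, which bitmap_find_free_region() guarantees and bitmap_find_next_zero_area() with a zero alignment mask does not. A stand-alone C model of the region rule (widths simplified; not the kernel implementation):

    #include <stdio.h>

    #define MSI_IRQ_NUM 32

    /* Allocate 2^order naturally aligned bits; return first bit or -1.
     * Mirrors the semantics of bitmap_find_free_region(). Orders are
     * kept small here so the shift arithmetic stays well defined. */
    static int find_free_region(unsigned long *map, int bits, int order)
    {
    	int pos, size = 1 << order;

    	for (pos = 0; pos + size <= bits; pos += size) {
    		unsigned long mask = ((1UL << size) - 1) << pos;

    		if (!(*map & mask)) {
    			*map |= mask;	/* claim the whole region */
    			return pos;
    		}
    	}
    	return -1;
    }

    int main(void)
    {
    	unsigned long msi_used = 0;	/* stand-in for pcie->msi_used */

    	/* each 4-vector request lands on a 4-aligned hwirq, as MSI demands */
    	printf("hwirq=%d\n", find_free_region(&msi_used, MSI_IRQ_NUM, 2));
    	printf("hwirq=%d\n", find_free_region(&msi_used, MSI_IRQ_NUM, 2));
    	return 0;
    }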
@@ -1269,7 +1334,6 @@ static int advk_pcie_irq_map(struct irq_domain *h,
 {
 	struct advk_pcie *pcie = h->host_data;

-	advk_pcie_irq_mask(irq_get_irq_data(virq));
 	irq_set_status_flags(virq, IRQ_LEVEL);
 	irq_set_chip_and_handler(virq, &pcie->irq_chip,
 				 handle_level_irq);
@@ -1283,37 +1347,25 @@ static const struct irq_domain_ops advk_pcie_irq_domain_ops = {
 	.xlate = irq_domain_xlate_onecell,
 };

+static struct irq_chip advk_msi_irq_chip = {
+	.name		= "advk-MSI",
+	.irq_mask	= advk_msi_top_irq_mask,
+	.irq_unmask	= advk_msi_top_irq_unmask,
+};
+
+static struct msi_domain_info advk_msi_domain_info = {
+	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+		  MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
+	.chip	= &advk_msi_irq_chip,
+};
+
 static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
 {
 	struct device *dev = &pcie->pdev->dev;
-	struct device_node *node = dev->of_node;
-	struct irq_chip *bottom_ic, *msi_ic;
-	struct msi_domain_info *msi_di;
-	phys_addr_t msi_msg_phys;

+	raw_spin_lock_init(&pcie->msi_irq_lock);
 	mutex_init(&pcie->msi_used_lock);

-	bottom_ic = &pcie->msi_bottom_irq_chip;
-	bottom_ic->name = "MSI";
-	bottom_ic->irq_compose_msi_msg = advk_msi_irq_compose_msi_msg;
-	bottom_ic->irq_set_affinity = advk_msi_set_affinity;
-
-	msi_ic = &pcie->msi_irq_chip;
-	msi_ic->name = "advk-MSI";
-
-	msi_di = &pcie->msi_domain_info;
-	msi_di->flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		MSI_FLAG_MULTI_PCI_MSI;
-	msi_di->chip = msi_ic;
-
-	msi_msg_phys = virt_to_phys(&pcie->msi_msg);
-
-	advk_writel(pcie, lower_32_bits(msi_msg_phys),
-		    PCIE_MSI_ADDR_LOW_REG);
-	advk_writel(pcie, upper_32_bits(msi_msg_phys),
-		    PCIE_MSI_ADDR_HIGH_REG);
-
 	pcie->msi_inner_domain =
 		irq_domain_add_linear(NULL, MSI_IRQ_NUM,
 				      &advk_msi_domain_ops, pcie);
@@ -1321,8 +1373,9 @@ static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
 		return -ENOMEM;

 	pcie->msi_domain =
-		pci_msi_create_irq_domain(of_node_to_fwnode(node),
-					  msi_di, pcie->msi_inner_domain);
+		pci_msi_create_irq_domain(dev_fwnode(dev),
+					  &advk_msi_domain_info,
+					  pcie->msi_inner_domain);
 	if (!pcie->msi_domain) {
 		irq_domain_remove(pcie->msi_inner_domain);
 		return -ENOMEM;
@@ -1363,7 +1416,6 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
 	}

 	irq_chip->irq_mask = advk_pcie_irq_mask;
-	irq_chip->irq_mask_ack = advk_pcie_irq_mask;
 	irq_chip->irq_unmask = advk_pcie_irq_unmask;

 	pcie->irq_domain =
@@ -1385,10 +1437,73 @@ static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)
 	irq_domain_remove(pcie->irq_domain);
 }

+static struct irq_chip advk_rp_irq_chip = {
+	.name = "advk-RP",
+};
+
+static int advk_pcie_rp_irq_map(struct irq_domain *h,
+				unsigned int virq, irq_hw_number_t hwirq)
+{
+	struct advk_pcie *pcie = h->host_data;
+
+	irq_set_chip_and_handler(virq, &advk_rp_irq_chip, handle_simple_irq);
+	irq_set_chip_data(virq, pcie);
+
+	return 0;
+}
+
+static const struct irq_domain_ops advk_pcie_rp_irq_domain_ops = {
+	.map = advk_pcie_rp_irq_map,
+	.xlate = irq_domain_xlate_onecell,
+};
+
+static int advk_pcie_init_rp_irq_domain(struct advk_pcie *pcie)
+{
+	pcie->rp_irq_domain = irq_domain_add_linear(NULL, 1,
+						    &advk_pcie_rp_irq_domain_ops,
+						    pcie);
+	if (!pcie->rp_irq_domain) {
+		dev_err(&pcie->pdev->dev, "Failed to add Root Port IRQ domain\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void advk_pcie_remove_rp_irq_domain(struct advk_pcie *pcie)
+{
+	irq_domain_remove(pcie->rp_irq_domain);
+}
+
+static void advk_pcie_handle_pme(struct advk_pcie *pcie)
+{
+	u32 requester = advk_readl(pcie, PCIE_MSG_LOG_REG) >> 16;
+
+	advk_writel(pcie, PCIE_MSG_PM_PME_MASK, PCIE_ISR0_REG);
+
+	/*
+	 * PCIE_MSG_LOG_REG contains the last inbound message, so store
+	 * the requester ID only when PME was not asserted yet.
+	 * Also do not trigger PME interrupt when PME is still asserted.
+	 */
+	if (!(le32_to_cpu(pcie->bridge.pcie_conf.rootsta) & PCI_EXP_RTSTA_PME)) {
+		pcie->bridge.pcie_conf.rootsta = cpu_to_le32(requester | PCI_EXP_RTSTA_PME);
+
+		/*
+		 * Trigger PME interrupt only if PMEIE bit in Root Control is set.
+		 * Aardvark HW returns zero for PCI_EXP_FLAGS_IRQ, so use PCIe interrupt 0.
+		 */
+		if (!(le16_to_cpu(pcie->bridge.pcie_conf.rootctl) & PCI_EXP_RTCTL_PMEIE))
+			return;
+
+		if (generic_handle_domain_irq(pcie->rp_irq_domain, 0) == -EINVAL)
+			dev_err_ratelimited(&pcie->pdev->dev, "unhandled PME IRQ\n");
+	}
+}
+
 static void advk_pcie_handle_msi(struct advk_pcie *pcie)
 {
 	u32 msi_val, msi_mask, msi_status, msi_idx;
-	u16 msi_data;

 	msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
 	msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG);
@@ -1398,13 +1513,9 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
 		if (!(BIT(msi_idx) & msi_status))
 			continue;

-		/*
-		 * msi_idx contains bits [4:0] of the msi_data and msi_data
-		 * contains 16bit MSI interrupt number
-		 */
 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
-		generic_handle_irq(msi_data);
+		if (generic_handle_domain_irq(pcie->msi_inner_domain, msi_idx) == -EINVAL)
+			dev_err_ratelimited(&pcie->pdev->dev, "unexpected MSI 0x%02x\n", msi_idx);
 	}

 	advk_writel(pcie, PCIE_ISR0_MSI_INT_PENDING,
@@ -1425,6 +1536,22 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
 	isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
 	isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);

+	/* Process the PME interrupt first so as not to miss the PME requester id */
+	if (isr0_status & PCIE_MSG_PM_PME_MASK)
+		advk_pcie_handle_pme(pcie);
+
+	/* Process ERR interrupt */
+	if (isr0_status & PCIE_ISR0_ERR_MASK) {
+		advk_writel(pcie, PCIE_ISR0_ERR_MASK, PCIE_ISR0_REG);
+
+		/*
+		 * Aardvark HW returns zero for PCI_ERR_ROOT_AER_IRQ, so use
+		 * PCIe interrupt 0
+		 */
+		if (generic_handle_domain_irq(pcie->rp_irq_domain, 0) == -EINVAL)
+			dev_err_ratelimited(&pcie->pdev->dev, "unhandled ERR IRQ\n");
+	}
+
 	/* Process MSI interrupts */
 	if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
 		advk_pcie_handle_msi(pcie);
@@ -1437,28 +1564,50 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
 			advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i),
 				    PCIE_ISR1_REG);

-		generic_handle_domain_irq(pcie->irq_domain, i);
+		if (generic_handle_domain_irq(pcie->irq_domain, i) == -EINVAL)
+			dev_err_ratelimited(&pcie->pdev->dev, "unexpected INT%c IRQ\n",
+					    (char)i + 'A');
 	}
 }

-static irqreturn_t advk_pcie_irq_handler(int irq, void *arg)
+static void advk_pcie_irq_handler(struct irq_desc *desc)
 {
-	struct advk_pcie *pcie = arg;
-	u32 status;
+	struct advk_pcie *pcie = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	u32 val, mask, status;

-	status = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG);
-	if (!(status & PCIE_IRQ_CORE_INT))
-		return IRQ_NONE;
+	chained_irq_enter(chip, desc);

-	advk_pcie_handle_int(pcie);
+	val = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG);
+	mask = advk_readl(pcie, HOST_CTRL_INT_MASK_REG);
+	status = val & ((~mask) & PCIE_IRQ_ALL_MASK);

-	/* Clear interrupt */
-	advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG);
+	if (status & PCIE_IRQ_CORE_INT) {
+		advk_pcie_handle_int(pcie);

-	return IRQ_HANDLED;
+		/* Clear interrupt */
+		advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG);
+	}
+
+	chained_irq_exit(chip, desc);
 }
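With devm_request_irq() gone, the summary interrupt becomes a chained flow handler, and it now honours HOST_CTRL_INT_MASK_REG instead of trusting the raw status word alone. A stand-alone C model of the mask-then-demultiplex step (bit positions are illustrative, not the real register layout):

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the dispatch in advk_pcie_irq_handler(): the summary status
     * is ANDed with the inverted mask register before demultiplexing, so a
     * masked source can never trigger handling spuriously. */
    #define IRQ_ALL_MASK	0xffffffffu
    #define IRQ_CORE_INT	(1u << 16)	/* bit position illustrative */

    static void handle_core_int(void) { puts("core interrupt handled"); }

    static void irq_handler(uint32_t status_reg, uint32_t mask_reg)
    {
    	uint32_t status = status_reg & (~mask_reg & IRQ_ALL_MASK);

    	if (status & IRQ_CORE_INT)
    		handle_core_int();	/* then W1C the status bit */
    }

    int main(void)
    {
    	irq_handler(IRQ_CORE_INT, 0);		 /* unmasked: handled */
    	irq_handler(IRQ_CORE_INT, IRQ_CORE_INT); /* masked: ignored  */
    	return 0;
    }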
-static void __maybe_unused advk_pcie_disable_phy(struct advk_pcie *pcie)
+static int advk_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+{
+	struct advk_pcie *pcie = dev->bus->sysdata;
+
+	/*
+	 * Emulated root bridge has its own emulated irq chip and irq domain.
+	 * Argument pin is the INTx pin (1=INTA, 2=INTB, 3=INTC, 4=INTD) and
+	 * hwirq for irq_create_mapping() is indexed from zero.
+	 */
+	if (pci_is_root_bus(dev->bus))
+		return irq_create_mapping(pcie->rp_irq_domain, pin - 1);
+	else
+		return of_irq_parse_and_map_pci(dev, slot, pin);
+}
+
+static void advk_pcie_disable_phy(struct advk_pcie *pcie)
 {
 	phy_power_off(pcie->phy);
 	phy_exit(pcie->phy);
@@ -1522,7 +1671,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
 	struct advk_pcie *pcie;
 	struct pci_host_bridge *bridge;
 	struct resource_entry *entry;
-	int ret, irq;
+	int ret;

 	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
 	if (!bridge)
@@ -1608,17 +1757,9 @@ static int advk_pcie_probe(struct platform_device *pdev)
 	if (IS_ERR(pcie->base))
 		return PTR_ERR(pcie->base);

-	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return irq;
-
-	ret = devm_request_irq(dev, irq, advk_pcie_irq_handler,
-			       IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie",
-			       pcie);
-	if (ret) {
-		dev_err(dev, "Failed to register interrupt\n");
-		return ret;
-	}
+	pcie->irq = platform_get_irq(pdev, 0);
+	if (pcie->irq < 0)
+		return pcie->irq;

 	pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
 						       "reset-gpios", 0,
@@ -1667,11 +1808,24 @@ static int advk_pcie_probe(struct platform_device *pdev)
 		return ret;
 	}

+	ret = advk_pcie_init_rp_irq_domain(pcie);
+	if (ret) {
+		dev_err(dev, "Failed to initialize irq\n");
+		advk_pcie_remove_msi_irq_domain(pcie);
+		advk_pcie_remove_irq_domain(pcie);
+		return ret;
+	}
+
+	irq_set_chained_handler_and_data(pcie->irq, advk_pcie_irq_handler, pcie);
+
 	bridge->sysdata = pcie;
 	bridge->ops = &advk_pcie_ops;
+	bridge->map_irq = advk_pcie_map_irq;

 	ret = pci_host_probe(bridge);
 	if (ret < 0) {
+		irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
+		advk_pcie_remove_rp_irq_domain(pcie);
 		advk_pcie_remove_msi_irq_domain(pcie);
 		advk_pcie_remove_irq_domain(pcie);
 		return ret;
@@ -1719,7 +1873,11 @@ static int advk_pcie_remove(struct platform_device *pdev)
 	advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
 	advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);

+	/* Remove IRQ handler */
+	irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
+
 	/* Remove IRQ domains */
+	advk_pcie_remove_rp_irq_domain(pcie);
 	advk_pcie_remove_msi_irq_domain(pcie);
 	advk_pcie_remove_irq_domain(pcie);


@@ -616,6 +616,121 @@ static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
 {
 	return pci_msi_prepare(domain, dev, nvec, info);
 }

+/**
+ * hv_arch_irq_unmask() - "Unmask" the IRQ by setting its current
+ * affinity.
+ * @data: Describes the IRQ
+ *
+ * Build a new destination for the MSI and make a hypercall to
+ * update the Interrupt Redirection Table. "Device Logical ID"
+ * is built out of this PCI bus's instance GUID and the function
+ * number of the device.
+ */
+static void hv_arch_irq_unmask(struct irq_data *data)
+{
+	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
+	struct hv_retarget_device_interrupt *params;
+	struct hv_pcibus_device *hbus;
+	struct cpumask *dest;
+	cpumask_var_t tmp;
+	struct pci_bus *pbus;
+	struct pci_dev *pdev;
+	unsigned long flags;
+	u32 var_size = 0;
+	int cpu, nr_bank;
+	u64 res;
+
+	dest = irq_data_get_effective_affinity_mask(data);
+	pdev = msi_desc_to_pci_dev(msi_desc);
+	pbus = pdev->bus;
+	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
+
+	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
+
+	params = &hbus->retarget_msi_interrupt_params;
+	memset(params, 0, sizeof(*params));
+	params->partition_id = HV_PARTITION_ID_SELF;
+	params->int_entry.source = HV_INTERRUPT_SOURCE_MSI;
+	hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);
+	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
+			   (hbus->hdev->dev_instance.b[4] << 16) |
+			   (hbus->hdev->dev_instance.b[7] << 8) |
+			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
+			   PCI_FUNC(pdev->devfn);
+	params->int_target.vector = hv_msi_get_int_vector(data);
+
+	/*
+	 * Honoring apic->delivery_mode set to APIC_DELIVERY_MODE_FIXED by
+	 * setting the HV_DEVICE_INTERRUPT_TARGET_MULTICAST flag results in a
+	 * spurious interrupt storm. Not doing so does not seem to have a
+	 * negative effect (yet?).
+	 */
+
+	if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) {
+		/*
+		 * PCI_PROTOCOL_VERSION_1_2 supports the VP_SET version of the
+		 * HVCALL_RETARGET_INTERRUPT hypercall, which also coincides
+		 * with >64 VP support.
+		 *
+		 * ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED
+		 * is not sufficient for this hypercall.
+		 */
+		params->int_target.flags |=
+			HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET;
+
+		if (!alloc_cpumask_var(&tmp, GFP_ATOMIC)) {
+			res = 1;
+			goto exit_unlock;
+		}
+
+		cpumask_and(tmp, dest, cpu_online_mask);
+		nr_bank = cpumask_to_vpset(&params->int_target.vp_set, tmp);
+		free_cpumask_var(tmp);
+
+		if (nr_bank <= 0) {
+			res = 1;
+			goto exit_unlock;
+		}
+
+		/*
+		 * var-sized hypercall, var-size starts after vp_mask (thus
+		 * vp_set.format does not count, but vp_set.valid_bank_mask
+		 * does).
+		 */
+		var_size = 1 + nr_bank;
+	} else {
+		for_each_cpu_and(cpu, dest, cpu_online_mask) {
+			params->int_target.vp_mask |=
+				(1ULL << hv_cpu_number_to_vp_number(cpu));
+		}
+	}
+
+	res = hv_do_hypercall(HVCALL_RETARGET_INTERRUPT | (var_size << 17),
+			      params, NULL);
+
+exit_unlock:
+	spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);
+
+	/*
+	 * During hibernation, when a CPU is offlined, the kernel tries
+	 * to move the interrupt to the remaining CPUs that haven't
+	 * been offlined yet. In this case, the below hv_do_hypercall()
+	 * always fails since the vmbus channel has been closed:
+	 * refer to cpu_disable_common() -> fixup_irqs() ->
+	 * irq_migrate_all_off_this_cpu() -> migrate_one_irq().
+	 *
+	 * Suppress the error message for hibernation because the failure
+	 * during hibernation does not matter (at this time all the devices
+	 * have been frozen). Note: the correct affinity info is still updated
+	 * into the irqdata data structure in migrate_one_irq() ->
+	 * irq_do_set_affinity() -> hv_set_affinity(), so later when the VM
+	 * resumes, hv_pci_restore_msi_state() is able to correctly restore
+	 * the interrupt with the correct affinity.
+	 */
+	if (!hv_result_success(res) && hbus->state != hv_pcibus_removing)
+		dev_err(&hbus->hdev->device,
+			"%s() failed: %#llx", __func__, res);
+}
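The `(var_size << 17)` above packs the variable-header size, counted in 8-byte chunks, into the hypercall control word; for the VP_SET form that is valid_bank_mask plus one 64-bit word per bank returned by cpumask_to_vpset(). A stand-alone model of just this encoding, using the TLFS constant values as I understand them:

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the hypercall control word built above: the variable header
     * size lives at bits 17 and up (HV_HYPERCALL_VARHEAD_OFFSET in the
     * kernel's hyperv-tlfs.h), the call code in the low bits. */
    #define HVCALL_RETARGET_INTERRUPT	0x7e
    #define HV_HYPERCALL_VARHEAD_OFFSET	17

    int main(void)
    {
    	uint64_t nr_bank = 2;			/* banks from cpumask_to_vpset() */
    	uint64_t var_size = 1 + nr_bank;	/* valid_bank_mask + banks */
    	uint64_t control = HVCALL_RETARGET_INTERRUPT |
    			   (var_size << HV_HYPERCALL_VARHEAD_OFFSET);

    	printf("control word: %#llx\n", (unsigned long long)control);
    	return 0;
    }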
 #elif defined(CONFIG_ARM64)
 /*
  * SPI vectors to use for vPCI; arch SPIs range is [32, 1019], but leaving a bit
@@ -839,6 +954,12 @@ static struct irq_domain *hv_pci_get_root_domain(void)
 {
 	return hv_msi_gic_irq_domain;
 }
+
+/*
+ * SPIs are used for interrupts of PCI devices, and SPIs are managed via GICD
+ * registers, which Hyper-V already supports, so no hypercall is needed.
+ */
+static void hv_arch_irq_unmask(struct irq_data *data) { }
 #endif /* CONFIG_ARM64 */

 /**
@@ -1456,119 +1577,9 @@ static void hv_irq_mask(struct irq_data *data)
 		irq_chip_mask_parent(data);
 }

-/**
- * hv_irq_unmask() - "Unmask" the IRQ by setting its current
- * affinity.
- * @data: Describes the IRQ
- *
- * Build new a destination for the MSI and make a hypercall to
- * update the Interrupt Redirection Table. "Device Logical ID"
- * is built out of this PCI bus's instance GUID and the function
- * number of the device.
- */
 static void hv_irq_unmask(struct irq_data *data)
 {
-	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
-	struct hv_retarget_device_interrupt *params;
-	struct hv_pcibus_device *hbus;
-	struct cpumask *dest;
-	cpumask_var_t tmp;
-	struct pci_bus *pbus;
-	struct pci_dev *pdev;
-	unsigned long flags;
-	u32 var_size = 0;
-	int cpu, nr_bank;
-	u64 res;
-
-	dest = irq_data_get_effective_affinity_mask(data);
-	pdev = msi_desc_to_pci_dev(msi_desc);
-	pbus = pdev->bus;
-	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
-
-	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
-
-	params = &hbus->retarget_msi_interrupt_params;
-	memset(params, 0, sizeof(*params));
-	params->partition_id = HV_PARTITION_ID_SELF;
-	params->int_entry.source = HV_INTERRUPT_SOURCE_MSI;
-	hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);
-	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
-			   (hbus->hdev->dev_instance.b[4] << 16) |
-			   (hbus->hdev->dev_instance.b[7] << 8) |
-			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
-			   PCI_FUNC(pdev->devfn);
-	params->int_target.vector = hv_msi_get_int_vector(data);
-
-	/*
-	 * Honoring apic->delivery_mode set to APIC_DELIVERY_MODE_FIXED by
-	 * setting the HV_DEVICE_INTERRUPT_TARGET_MULTICAST flag results in a
-	 * spurious interrupt storm. Not doing so does not seem to have a
-	 * negative effect (yet?).
-	 */
-
-	if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) {
-		/*
-		 * PCI_PROTOCOL_VERSION_1_2 supports the VP_SET version of the
-		 * HVCALL_RETARGET_INTERRUPT hypercall, which also coincides
-		 * with >64 VP support.
-		 *
-		 * ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED
-		 * is not sufficient for this hypercall.
-		 */
-		params->int_target.flags |=
-			HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET;
-
-		if (!alloc_cpumask_var(&tmp, GFP_ATOMIC)) {
-			res = 1;
-			goto exit_unlock;
-		}
-
-		cpumask_and(tmp, dest, cpu_online_mask);
-		nr_bank = cpumask_to_vpset(&params->int_target.vp_set, tmp);
-		free_cpumask_var(tmp);
-
-		if (nr_bank <= 0) {
-			res = 1;
-			goto exit_unlock;
-		}
-
-		/*
-		 * var-sized hypercall, var-size starts after vp_mask (thus
-		 * vp_set.format does not count, but vp_set.valid_bank_mask
-		 * does).
-		 */
-		var_size = 1 + nr_bank;
-	} else {
-		for_each_cpu_and(cpu, dest, cpu_online_mask) {
-			params->int_target.vp_mask |=
-				(1ULL << hv_cpu_number_to_vp_number(cpu));
-		}
-	}
-
-	res = hv_do_hypercall(HVCALL_RETARGET_INTERRUPT | (var_size << 17),
-			      params, NULL);
-
-exit_unlock:
-	spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);
-
-	/*
-	 * During hibernation, when a CPU is offlined, the kernel tries
-	 * to move the interrupt to the remaining CPUs that haven't
-	 * been offlined yet. In this case, the below hv_do_hypercall()
-	 * always fails since the vmbus channel has been closed:
-	 * refer to cpu_disable_common() -> fixup_irqs() ->
-	 * irq_migrate_all_off_this_cpu() -> migrate_one_irq().
-	 *
-	 * Suppress the error message for hibernation because the failure
-	 * during hibernation does not matter (at this time all the devices
-	 * have been frozen). Note: the correct affinity info is still updated
-	 * into the irqdata data structure in migrate_one_irq() ->
-	 * irq_do_set_affinity() -> hv_set_affinity(), so later when the VM
-	 * resumes, hv_pci_restore_msi_state() is able to correctly restore
-	 * the interrupt with the correct affinity.
-	 */
-	if (!hv_result_success(res) && hbus->state != hv_pcibus_removing)
-		dev_err(&hbus->hdev->device,
-			"%s() failed: %#llx", __func__, res);
+	hv_arch_irq_unmask(data);

 	if (data->parent_data->chip->irq_unmask)
 		irq_chip_unmask_parent(data);


@@ -35,7 +35,7 @@ struct loongson_pci {
 /* Fixup wrong class code in PCIe bridges */
 static void bridge_class_quirk(struct pci_dev *dev)
 {
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
 			DEV_PCIE_PORT_0, bridge_class_quirk);


@@ -32,8 +32,9 @@
 #define PCIE_DEV_REV_OFF	0x0008
 #define PCIE_BAR_LO_OFF(n)	(0x0010 + ((n) << 3))
 #define PCIE_BAR_HI_OFF(n)	(0x0014 + ((n) << 3))
+#define PCIE_SSDEV_ID_OFF	0x002c
 #define PCIE_CAP_PCIEXP		0x0060
-#define PCIE_HEADER_LOG_4_OFF	0x0128
+#define PCIE_CAP_PCIERR_OFF	0x0100
 #define PCIE_BAR_CTRL_OFF(n)	(0x1804 + (((n) - 1) * 4))
 #define PCIE_WIN04_CTRL_OFF(n)	(0x1820 + ((n) << 4))
 #define PCIE_WIN04_BASE_OFF(n)	(0x1824 + ((n) << 4))
@@ -53,9 +54,10 @@
 	 PCIE_CONF_ADDR_EN)
 #define PCIE_CONF_DATA_OFF	0x18fc
 #define PCIE_INT_CAUSE_OFF	0x1900
+#define PCIE_INT_UNMASK_OFF	0x1910
+#define  PCIE_INT_INTX(i)	BIT(24+i)
 #define  PCIE_INT_PM_PME	BIT(28)
-#define PCIE_MASK_OFF		0x1910
-#define  PCIE_MASK_ENABLE_INTS	0x0f000000
+#define  PCIE_INT_ALL_MASK	GENMASK(31, 0)
 #define PCIE_CTRL_OFF		0x1a00
 #define  PCIE_CTRL_X1_MODE	0x0001
 #define  PCIE_CTRL_RC_MODE	BIT(1)
@@ -93,6 +95,7 @@ struct mvebu_pcie_port {
 	void __iomem *base;
 	u32 port;
 	u32 lane;
+	bool is_x4;
 	int devfn;
 	unsigned int mem_target;
 	unsigned int mem_attr;
@@ -108,6 +111,9 @@ struct mvebu_pcie_port {
 	struct mvebu_pcie_window iowin;
 	u32 saved_pcie_stat;
 	struct resource regs;
+	struct irq_domain *intx_irq_domain;
+	raw_spinlock_t irq_lock;
+	int intx_irq;
 };

 static inline void mvebu_writel(struct mvebu_pcie_port *port, u32 val, u32 reg)
@@ -233,13 +239,25 @@ static void mvebu_pcie_setup_wins(struct mvebu_pcie_port *port)
 static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port)
 {
-	u32 ctrl, cmd, dev_rev, mask;
+	u32 ctrl, lnkcap, cmd, dev_rev, unmask;

 	/* Setup PCIe controller to Root Complex mode. */
 	ctrl = mvebu_readl(port, PCIE_CTRL_OFF);
 	ctrl |= PCIE_CTRL_RC_MODE;
 	mvebu_writel(port, ctrl, PCIE_CTRL_OFF);

+	/*
+	 * Set Maximum Link Width to X1 or X4 in Root Port's PCIe Link
+	 * Capability register. This register is defined by PCIe specification
+	 * as read-only but this mvebu controller has it as read-write and must
+	 * be set to number of SerDes PCIe lanes (1 or 4). If this register is
+	 * not set correctly then link with endpoint card is not established.
+	 */
+	lnkcap = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
+	lnkcap &= ~PCI_EXP_LNKCAP_MLW;
+	lnkcap |= (port->is_x4 ? 4 : 1) << 4;
+	mvebu_writel(port, lnkcap, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
+
 	/* Disable Root Bridge I/O space, memory space and bus mastering. */
 	cmd = mvebu_readl(port, PCIE_CMD_OFF);
 	cmd &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
@@ -268,23 +286,57 @@ static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port)
 	 */
 	dev_rev = mvebu_readl(port, PCIE_DEV_REV_OFF);
 	dev_rev &= ~0xffffff00;
-	dev_rev |= (PCI_CLASS_BRIDGE_PCI << 8) << 8;
+	dev_rev |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
 	mvebu_writel(port, dev_rev, PCIE_DEV_REV_OFF);

 	/* Point PCIe unit MBUS decode windows to DRAM space. */
 	mvebu_pcie_setup_wins(port);

-	/* Enable interrupt lines A-D. */
-	mask = mvebu_readl(port, PCIE_MASK_OFF);
-	mask |= PCIE_MASK_ENABLE_INTS;
-	mvebu_writel(port, mask, PCIE_MASK_OFF);
+	/* Mask all interrupt sources. */
+	mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_UNMASK_OFF);
+
+	/* Clear all interrupt causes. */
+	mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
+
+	/* Check if "intx" interrupt was specified in DT. */
+	if (port->intx_irq > 0)
+		return;
+
+	/*
+	 * Fallback code when "intx" interrupt was not specified in DT:
+	 * Unmask all legacy INTx interrupts as driver does not provide a way
+	 * for masking and unmasking of individual legacy INTx interrupts.
+	 * Legacy INTx are reported via one shared GIC source and therefore
+	 * kernel cannot distinguish which individual legacy INTx was triggered.
+	 * These interrupts are shared, so it should not cause any issue. Just
+	 * performance penalty as every PCIe interrupt handler needs to be
+	 * called when some interrupt is triggered.
+	 */
+	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
+	unmask |= PCIE_INT_INTX(0) | PCIE_INT_INTX(1) |
+		  PCIE_INT_INTX(2) | PCIE_INT_INTX(3);
+	mvebu_writel(port, unmask, PCIE_INT_UNMASK_OFF);
 }
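The MLW update above is easy to misparse: PCI_EXP_LNKCAP_MLW occupies bits 9:4 of the Link Capabilities register, so an x1 or x4 width is encoded by shifting the lane count left by four. A stand-alone model of the read-modify-write (register value illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the Maximum Link Width update in mvebu_pcie_setup_hw():
     * PCI_EXP_LNKCAP_MLW is bits 9:4 of Link Capabilities. */
    #define PCI_EXP_LNKCAP_MLW 0x000003f0

    static uint32_t set_max_link_width(uint32_t lnkcap, unsigned int lanes)
    {
    	lnkcap &= ~PCI_EXP_LNKCAP_MLW;	/* clear the width field */
    	lnkcap |= lanes << 4;		/* 1 or 4 SerDes lanes */
    	return lnkcap;
    }

    int main(void)
    {
    	printf("%#x\n", set_max_link_width(0x0003ac12, 4)); /* x4 link */
    	return 0;
    }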
-static int mvebu_pcie_hw_rd_conf(struct mvebu_pcie_port *port,
-				 struct pci_bus *bus,
-				 u32 devfn, int where, int size, u32 *val)
+static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie,
+						    struct pci_bus *bus,
+						    int devfn);
+
+static int mvebu_pcie_child_rd_conf(struct pci_bus *bus, u32 devfn, int where,
+				    int size, u32 *val)
 {
-	void __iomem *conf_data = port->base + PCIE_CONF_DATA_OFF;
+	struct mvebu_pcie *pcie = bus->sysdata;
+	struct mvebu_pcie_port *port;
+	void __iomem *conf_data;
+
+	port = mvebu_pcie_find_port(pcie, bus, devfn);
+	if (!port)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	if (!mvebu_pcie_link_up(port))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	conf_data = port->base + PCIE_CONF_DATA_OFF;

 	mvebu_writel(port, PCIE_CONF_ADDR(bus->number, devfn, where),
 		     PCIE_CONF_ADDR_OFF);
@@ -300,18 +352,27 @@ static int mvebu_pcie_hw_rd_conf(struct mvebu_pcie_port *port,
 		*val = readl_relaxed(conf_data);
 		break;
 	default:
-		*val = 0xffffffff;
 		return PCIBIOS_BAD_REGISTER_NUMBER;
 	}

 	return PCIBIOS_SUCCESSFUL;
 }

-static int mvebu_pcie_hw_wr_conf(struct mvebu_pcie_port *port,
-				 struct pci_bus *bus,
-				 u32 devfn, int where, int size, u32 val)
+static int mvebu_pcie_child_wr_conf(struct pci_bus *bus, u32 devfn,
+				    int where, int size, u32 val)
 {
-	void __iomem *conf_data = port->base + PCIE_CONF_DATA_OFF;
+	struct mvebu_pcie *pcie = bus->sysdata;
+	struct mvebu_pcie_port *port;
+	void __iomem *conf_data;
+
+	port = mvebu_pcie_find_port(pcie, bus, devfn);
+	if (!port)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	if (!mvebu_pcie_link_up(port))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	conf_data = port->base + PCIE_CONF_DATA_OFF;

 	mvebu_writel(port, PCIE_CONF_ADDR(bus->number, devfn, where),
 		     PCIE_CONF_ADDR_OFF);
@@ -333,6 +394,11 @@ static int mvebu_pcie_hw_wr_conf(struct mvebu_pcie_port *port,
 	return PCIBIOS_SUCCESSFUL;
 }

+static struct pci_ops mvebu_pcie_child_ops = {
+	.read = mvebu_pcie_child_rd_conf,
+	.write = mvebu_pcie_child_wr_conf,
+};
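Both child accessors funnel through the classic indirect pair: a packed bus/devfn/offset selector is written to PCIE_CONF_ADDR_OFF, then the payload moves through PCIE_CONF_DATA_OFF. A stand-alone model of such a selector; the field layout below is illustrative, not the exact PCIE_CONF_ADDR() encoding:

    #include <stdint.h>
    #include <stdio.h>

    /* Model of an indirect config-cycle address word: enable bit plus
     * bus, devfn and dword-aligned register offset. Bit positions here
     * are assumptions for illustration only. */
    static uint32_t conf_addr(uint8_t bus, uint8_t devfn, uint16_t where)
    {
    	return (1u << 31) |		/* enable bit */
    	       ((uint32_t)bus << 16) |
    	       ((uint32_t)devfn << 8) |
    	       (where & 0xfffc);	/* dword-aligned offset */
    }

    int main(void)
    {
    	printf("%#x\n", conf_addr(1, 0, 0x10)); /* BAR0 of 01:00.0 */
    	return 0;
    }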
 /*
  * Remove windows, starting from the largest ones to the smallest
  * ones.
@@ -438,12 +504,6 @@ static int mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
 		return mvebu_pcie_set_window(port, port->io_target, port->io_attr,
 					     &desired, &port->iowin);

-	if (!mvebu_has_ioport(port)) {
-		dev_WARN(&port->pcie->pdev->dev,
-			 "Attempt to set IO when IO is disabled\n");
-		return -EOPNOTSUPP;
-	}
-
 	/*
 	 * We read the PCI-to-PCI bridge emulated registers, and
 	 * calculate the base address and size of the address decoding
@@ -552,15 +612,20 @@ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
 	case PCI_EXP_LNKCAP:
 		/*
-		 * PCIe requires the clock power management capability to be
-		 * hard-wired to zero for downstream ports
+		 * PCIe requires that the Clock Power Management capability bit
+		 * is hard-wired to zero for downstream ports but HW returns 1.
+		 * Additionally enable Data Link Layer Link Active Reporting
+		 * Capable bit as DL_Active indication is provided too.
 		 */
-		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP) &
-			 ~PCI_EXP_LNKCAP_CLKPM;
+		*value = (mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP) &
+			  ~PCI_EXP_LNKCAP_CLKPM) | PCI_EXP_LNKCAP_DLLLARC;
 		break;

 	case PCI_EXP_LNKCTL:
-		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
+		/* DL_Active indication is provided via PCIE_STAT_OFF */
+		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL) |
+			 (mvebu_pcie_link_up(port) ?
+			  (PCI_EXP_LNKSTA_DLLLA << 16) : 0);
 		break;

 	case PCI_EXP_SLTCTL:
@@ -590,6 +655,37 @@ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
 	return PCI_BRIDGE_EMUL_HANDLED;
 }

+static pci_bridge_emul_read_status_t
+mvebu_pci_bridge_emul_ext_conf_read(struct pci_bridge_emul *bridge,
+				    int reg, u32 *value)
+{
+	struct mvebu_pcie_port *port = bridge->data;
+
+	switch (reg) {
+	case 0:
+	case PCI_ERR_UNCOR_STATUS:
+	case PCI_ERR_UNCOR_MASK:
+	case PCI_ERR_UNCOR_SEVER:
+	case PCI_ERR_COR_STATUS:
+	case PCI_ERR_COR_MASK:
+	case PCI_ERR_CAP:
+	case PCI_ERR_HEADER_LOG+0:
+	case PCI_ERR_HEADER_LOG+4:
+	case PCI_ERR_HEADER_LOG+8:
+	case PCI_ERR_HEADER_LOG+12:
+	case PCI_ERR_ROOT_COMMAND:
+	case PCI_ERR_ROOT_STATUS:
+	case PCI_ERR_ROOT_ERR_SRC:
+		*value = mvebu_readl(port, PCIE_CAP_PCIERR_OFF + reg);
+		break;
+	default:
+		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+	}
+
+	return PCI_BRIDGE_EMUL_HANDLED;
+}
+
 static void
 mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
 				      int reg, u32 old, u32 new, u32 mask)
@@ -599,24 +695,18 @@ mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
 	switch (reg) {
 	case PCI_COMMAND:
-		if (!mvebu_has_ioport(port)) {
-			conf->command = cpu_to_le16(
-				le16_to_cpu(conf->command) & ~PCI_COMMAND_IO);
-			new &= ~PCI_COMMAND_IO;
-		}
-
 		mvebu_writel(port, new, PCIE_CMD_OFF);
 		break;

 	case PCI_IO_BASE:
-		if ((mask & 0xffff) && mvebu_pcie_handle_iobase_change(port)) {
+		if ((mask & 0xffff) && mvebu_has_ioport(port) &&
+		    mvebu_pcie_handle_iobase_change(port)) {
 			/* On error disable IO range */
 			conf->iobase &= ~0xf0;
 			conf->iolimit &= ~0xf0;
+			conf->iobase |= 0xf0;
 			conf->iobaseupper = cpu_to_le16(0x0000);
 			conf->iolimitupper = cpu_to_le16(0x0000);
-			if (mvebu_has_ioport(port))
-				conf->iobase |= 0xf0;
 		}
 		break;
@@ -630,14 +720,14 @@ mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
 		break;

 	case PCI_IO_BASE_UPPER16:
-		if (mvebu_pcie_handle_iobase_change(port)) {
+		if (mvebu_has_ioport(port) &&
+		    mvebu_pcie_handle_iobase_change(port)) {
 			/* On error disable IO range */
 			conf->iobase &= ~0xf0;
 			conf->iolimit &= ~0xf0;
+			conf->iobase |= 0xf0;
 			conf->iobaseupper = cpu_to_le16(0x0000);
 			conf->iolimitupper = cpu_to_le16(0x0000);
-			if (mvebu_has_ioport(port))
-				conf->iobase |= 0xf0;
 		}
 		break;
@@ -675,10 +765,9 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
 	case PCI_EXP_LNKCTL:
 		/*
-		 * If we don't support CLKREQ, we must ensure that the
-		 * CLKREQ enable bit always reads zero. Since we haven't
-		 * had this capability, and it's dependent on board wiring,
-		 * disable it for the time being.
+		 * PCIe requires that the Enable Clock Power Management bit
+		 * is hard-wired to zero for downstream ports but HW allows
+		 * to change it.
 		 */
 		new &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
@@ -709,11 +798,45 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
 	}
 }

-static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
+static void
+mvebu_pci_bridge_emul_ext_conf_write(struct pci_bridge_emul *bridge,
+				     int reg, u32 old, u32 new, u32 mask)
+{
+	struct mvebu_pcie_port *port = bridge->data;
+
+	switch (reg) {
+	/* These are W1C registers, so clear other bits */
+	case PCI_ERR_UNCOR_STATUS:
+	case PCI_ERR_COR_STATUS:
+	case PCI_ERR_ROOT_STATUS:
+		new &= mask;
+		fallthrough;
+
+	case PCI_ERR_UNCOR_MASK:
+	case PCI_ERR_UNCOR_SEVER:
+	case PCI_ERR_COR_MASK:
+	case PCI_ERR_CAP:
+	case PCI_ERR_HEADER_LOG+0:
+	case PCI_ERR_HEADER_LOG+4:
+	case PCI_ERR_HEADER_LOG+8:
+	case PCI_ERR_HEADER_LOG+12:
+	case PCI_ERR_ROOT_COMMAND:
+	case PCI_ERR_ROOT_ERR_SRC:
+		mvebu_writel(port, new, PCIE_CAP_PCIERR_OFF + reg);
+		break;
+
+	default:
+		break;
+	}
+}
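The `new &= mask` in the W1C cases is the load-bearing line: as I read pci_bridge_emul, the write hook is handed a merged value (old register content with the written bytes replaced), so forwarding it unmasked to a write-1-to-clear register would clear status bits the guest driver never touched. A stand-alone model of the rule:

    #include <stdint.h>
    #include <stdio.h>

    /* Model: the emulation layer merges the written bytes into the old
     * register content before calling the hook. For W1C registers only
     * the bytes actually written may be forwarded to hardware. */
    static uint32_t w1c_forward(uint32_t merged_new, uint32_t mask)
    {
    	return merged_new & mask;	/* the driver's "new &= mask" */
    }

    int main(void)
    {
    	uint32_t old   = 0x000f0001;	/* status bits already set in HW */
    	uint32_t write = 0x00010000;	/* guest clears one bit...       */
    	uint32_t mask  = 0xffff0000;	/* ...via a 16-bit write         */
    	uint32_t merged = (old & ~mask) | write;

    	/* prints 0x10000: stale low bits are not forwarded */
    	printf("forwarded: %#x\n", w1c_forward(merged, mask));
    	return 0;
    }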
+static const struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
 	.read_base = mvebu_pci_bridge_emul_base_conf_read,
 	.write_base = mvebu_pci_bridge_emul_base_conf_write,
 	.read_pcie = mvebu_pci_bridge_emul_pcie_conf_read,
 	.write_pcie = mvebu_pci_bridge_emul_pcie_conf_write,
+	.read_ext = mvebu_pci_bridge_emul_ext_conf_read,
+	.write_ext = mvebu_pci_bridge_emul_ext_conf_write,
 };

 /*
@@ -722,19 +845,24 @@ static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
  */
 static int mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
 {
+	unsigned int bridge_flags = PCI_BRIDGE_EMUL_NO_PREFMEM_FORWARD;
 	struct pci_bridge_emul *bridge = &port->bridge;
+	u32 dev_id = mvebu_readl(port, PCIE_DEV_ID_OFF);
+	u32 dev_rev = mvebu_readl(port, PCIE_DEV_REV_OFF);
+	u32 ssdev_id = mvebu_readl(port, PCIE_SSDEV_ID_OFF);
 	u32 pcie_cap = mvebu_readl(port, PCIE_CAP_PCIEXP);
 	u8 pcie_cap_ver = ((pcie_cap >> 16) & PCI_EXP_FLAGS_VERS);

-	bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
-	bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
-	bridge->conf.class_revision =
-		mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff;
+	bridge->conf.vendor = cpu_to_le16(dev_id & 0xffff);
+	bridge->conf.device = cpu_to_le16(dev_id >> 16);
+	bridge->conf.class_revision = cpu_to_le32(dev_rev & 0xff);

 	if (mvebu_has_ioport(port)) {
 		/* We support 32 bits I/O addressing */
 		bridge->conf.iobase = PCI_IO_RANGE_TYPE_32;
 		bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
+	} else {
+		bridge_flags |= PCI_BRIDGE_EMUL_NO_IO_FORWARD;
 	}

 	/*
@@ -743,11 +871,13 @@ static int mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
 	 */
 	bridge->pcie_conf.cap = cpu_to_le16(pcie_cap_ver);

+	bridge->subsystem_vendor_id = ssdev_id & 0xffff;
+	bridge->subsystem_id = ssdev_id >> 16;
 	bridge->has_pcie = true;
 	bridge->data = port;
 	bridge->ops = &mvebu_pci_bridge_emul_ops;

-	return pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
+	return pci_bridge_emul_init(bridge, bridge_flags);
 }

 static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
@@ -784,25 +914,12 @@ static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 {
 	struct mvebu_pcie *pcie = bus->sysdata;
 	struct mvebu_pcie_port *port;
-	int ret;

 	port = mvebu_pcie_find_port(pcie, bus, devfn);
 	if (!port)
 		return PCIBIOS_DEVICE_NOT_FOUND;

-	/* Access the emulated PCI-to-PCI bridge */
-	if (bus->number == 0)
-		return pci_bridge_emul_conf_write(&port->bridge, where,
-						  size, val);
-
-	if (!mvebu_pcie_link_up(port))
-		return PCIBIOS_DEVICE_NOT_FOUND;
-
-	/* Access the real PCIe interface */
-	ret = mvebu_pcie_hw_wr_conf(port, bus, devfn,
-				    where, size, val);
-
-	return ret;
+	return pci_bridge_emul_conf_write(&port->bridge, where, size, val);
 }

 /* PCI configuration space read function */
@@ -811,25 +928,12 @@ static int mvebu_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
 {
 	struct mvebu_pcie *pcie = bus->sysdata;
 	struct mvebu_pcie_port *port;
-	int ret;

 	port = mvebu_pcie_find_port(pcie, bus, devfn);
 	if (!port)
 		return PCIBIOS_DEVICE_NOT_FOUND;

-	/* Access the emulated PCI-to-PCI bridge */
-	if (bus->number == 0)
-		return pci_bridge_emul_conf_read(&port->bridge, where,
-						 size, val);
-
-	if (!mvebu_pcie_link_up(port))
-		return PCIBIOS_DEVICE_NOT_FOUND;
-
-	/* Access the real PCIe interface */
-	ret = mvebu_pcie_hw_rd_conf(port, bus, devfn,
-				    where, size, val);
-
-	return ret;
+	return pci_bridge_emul_conf_read(&port->bridge, where, size, val);
 }

 static struct pci_ops mvebu_pcie_ops = {
@@ -837,6 +941,108 @@ static struct pci_ops mvebu_pcie_ops = {
 	.write = mvebu_pcie_wr_conf,
 };

+static void mvebu_pcie_intx_irq_mask(struct irq_data *d)
+{
+	struct mvebu_pcie_port *port = d->domain->host_data;
+	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
+	u32 unmask;
+
+	raw_spin_lock_irqsave(&port->irq_lock, flags);
+	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
+	unmask &= ~PCIE_INT_INTX(hwirq);
+	mvebu_writel(port, unmask, PCIE_INT_UNMASK_OFF);
+	raw_spin_unlock_irqrestore(&port->irq_lock, flags);
+}
+
+static void mvebu_pcie_intx_irq_unmask(struct irq_data *d)
+{
+	struct mvebu_pcie_port *port = d->domain->host_data;
+	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
+	u32 unmask;
+
+	raw_spin_lock_irqsave(&port->irq_lock, flags);
+	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
+	unmask |= PCIE_INT_INTX(hwirq);
+	mvebu_writel(port, unmask, PCIE_INT_UNMASK_OFF);
+	raw_spin_unlock_irqrestore(&port->irq_lock, flags);
+}
+
+static struct irq_chip intx_irq_chip = {
+	.name		= "mvebu-INTx",
+	.irq_mask	= mvebu_pcie_intx_irq_mask,
+	.irq_unmask	= mvebu_pcie_intx_irq_unmask,
+};
+
+static int mvebu_pcie_intx_irq_map(struct irq_domain *h,
+				   unsigned int virq, irq_hw_number_t hwirq)
+{
+	struct mvebu_pcie_port *port = h->host_data;
+
+	irq_set_status_flags(virq, IRQ_LEVEL);
+	irq_set_chip_and_handler(virq, &intx_irq_chip, handle_level_irq);
+	irq_set_chip_data(virq, port);
+
+	return 0;
+}
+
+static const struct irq_domain_ops mvebu_pcie_intx_irq_domain_ops = {
+	.map = mvebu_pcie_intx_irq_map,
+	.xlate = irq_domain_xlate_onecell,
+};
+
+static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
+{
+	struct device *dev = &port->pcie->pdev->dev;
+	struct device_node *pcie_intc_node;
+
+	raw_spin_lock_init(&port->irq_lock);
+
+	pcie_intc_node = of_get_next_child(port->dn, NULL);
+	if (!pcie_intc_node) {
+		dev_err(dev, "No PCIe Intc node found for %s\n", port->name);
+		return -ENODEV;
+	}
+
+	port->intx_irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+						      &mvebu_pcie_intx_irq_domain_ops,
+						      port);
+	of_node_put(pcie_intc_node);
+	if (!port->intx_irq_domain) {
+		dev_err(dev, "Failed to get INTx IRQ domain for %s\n", port->name);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void mvebu_pcie_irq_handler(struct irq_desc *desc)
+{
+	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct device *dev = &port->pcie->pdev->dev;
+	u32 cause, unmask, status;
+	int i;
+
+	chained_irq_enter(chip, desc);
+
+	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
+	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
+	status = cause & unmask;
+
+	/* Process legacy INTx interrupts */
+	for (i = 0; i < PCI_NUM_INTX; i++) {
+		if (!(status & PCIE_INT_INTX(i)))
+			continue;
+
+		if (generic_handle_domain_irq(port->intx_irq_domain, i) == -EINVAL)
+			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
+	}
+
+	chained_irq_exit(chip, desc);
+}
+
 static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	/* Interrupt support on mvebu emulated bridges is not implemented yet */
@@ -986,6 +1192,7 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
 	struct device *dev = &pcie->pdev->dev;
 	enum of_gpio_flags flags;
 	int reset_gpio, ret;
+	u32 num_lanes;

 	port->pcie = pcie;
@@ -998,6 +1205,9 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
 	if (of_property_read_u32(child, "marvell,pcie-lane", &port->lane))
 		port->lane = 0;

+	if (!of_property_read_u32(child, "num-lanes", &num_lanes) && num_lanes == 4)
+		port->is_x4 = true;
+
 	port->name = devm_kasprintf(dev, GFP_KERNEL, "pcie%d.%d", port->port,
 				    port->lane);
 	if (!port->name) {
@@ -1030,6 +1240,21 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
 		port->io_attr = -1;
 	}

+	/*
+	 * Old DT bindings do not contain "intx" interrupt
+	 * so do not fail probing driver when interrupt does not exist.
+	 */
+	port->intx_irq = of_irq_get_byname(child, "intx");
+	if (port->intx_irq == -EPROBE_DEFER) {
+		ret = port->intx_irq;
+		goto err;
+	}
+	if (port->intx_irq <= 0) {
+		dev_warn(dev, "%s: legacy INTx interrupts cannot be masked individually, "
+			      "%pOF does not contain intx interrupt\n",
+			 port->name, child);
+	}
+
 	reset_gpio = of_get_named_gpio_flags(child, "reset-gpios", 0, &flags);
 	if (reset_gpio == -EPROBE_DEFER) {
 		ret = reset_gpio;
@@ -1226,6 +1451,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)

 	for (i = 0; i < pcie->nports; i++) {
 		struct mvebu_pcie_port *port = &pcie->ports[i];
+		int irq = port->intx_irq;

 		child = port->dn;
 		if (!child)
@@ -1253,6 +1479,22 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
 			continue;
 		}

+		if (irq > 0) {
+			ret = mvebu_pcie_init_irq_domain(port);
+			if (ret) {
+				dev_err(dev, "%s: cannot init irq domain\n",
+					port->name);
+				pci_bridge_emul_cleanup(&port->bridge);
+				devm_iounmap(dev, port->base);
+				port->base = NULL;
+				mvebu_pcie_powerdown(port);
+				continue;
+			}
+			irq_set_chained_handler_and_data(irq,
+							 mvebu_pcie_irq_handler,
+							 port);
+		}
+
 		/*
 		 * PCIe topology exported by mvebu hw is quite complicated. In
 		 * reality has something like N fully independent host bridges
@@ -1333,10 +1575,9 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
 		mvebu_pcie_set_local_bus_nr(port, 0);
 	}

-	pcie->nports = i;
-
 	bridge->sysdata = pcie;
 	bridge->ops = &mvebu_pcie_ops;
+	bridge->child_ops = &mvebu_pcie_child_ops;
 	bridge->align_resource = mvebu_pcie_align_resource;
 	bridge->map_irq = mvebu_pcie_map_irq;
@@ -1358,6 +1599,7 @@ static int mvebu_pcie_remove(struct platform_device *pdev)

 	for (i = 0; i < pcie->nports; i++) {
 		struct mvebu_pcie_port *port = &pcie->ports[i];
+		int irq = port->intx_irq;

 		if (!port->base)
 			continue;
@@ -1368,7 +1610,17 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
 		mvebu_writel(port, cmd, PCIE_CMD_OFF);

 		/* Mask all interrupt sources. */
-		mvebu_writel(port, 0, PCIE_MASK_OFF);
+		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_UNMASK_OFF);
+
+		/* Clear all interrupt causes. */
+		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
+
+		if (irq > 0)
+			irq_set_chained_handler_and_data(irq, NULL, NULL);
+
+		/* Remove IRQ domains. */
+		if (port->intx_irq_domain)
+			irq_domain_remove(port->intx_irq_domain);

 		/* Free config space for emulated root bridge. */
 		pci_bridge_emul_cleanup(&port->bridge);


@@ -726,7 +726,7 @@ static void tegra_pcie_port_free(struct tegra_pcie_port *port)
 /* Tegra PCIE root complex wrongly reports device class */
 static void tegra_pcie_fixup_class(struct pci_dev *dev)
 {
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf0, tegra_pcie_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1, tegra_pcie_fixup_class);


@@ -49,7 +49,6 @@
 #define EN_REG				0x00000001
 #define OB_LO_IO			0x00000002
 #define XGENE_PCIE_DEVICEID		0xE004
-#define SZ_1T				(SZ_1G*1024ULL)
 #define PIPE_PHY_RATE_RD(src)		((0xc000 & (u32)(src)) >> 0xe)

 #define XGENE_V1_PCI_EXP_CAP		0x40
@@ -465,7 +464,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
 		return 1;
 	}

-	if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
+	if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
 		*ib_reg_mask |= (1 << 0);
 		return 0;
 	}
@@ -479,28 +478,27 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
 }

 static void xgene_pcie_setup_ib_reg(struct xgene_pcie *port,
-				    struct resource_entry *entry,
-				    u8 *ib_reg_mask)
+				    struct of_pci_range *range, u8 *ib_reg_mask)
 {
 	void __iomem *cfg_base = port->cfg_base;
 	struct device *dev = port->dev;
 	void __iomem *bar_addr;
 	u32 pim_reg;
-	u64 cpu_addr = entry->res->start;
-	u64 pci_addr = cpu_addr - entry->offset;
-	u64 size = resource_size(entry->res);
+	u64 cpu_addr = range->cpu_addr;
+	u64 pci_addr = range->pci_addr;
+	u64 size = range->size;
 	u64 mask = ~(size - 1) | EN_REG;
 	u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64;
 	u32 bar_low;
 	int region;

-	region = xgene_pcie_select_ib_reg(ib_reg_mask, size);
+	region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size);
 	if (region < 0) {
 		dev_warn(dev, "invalid pcie dma-range config\n");
 		return;
 	}

-	if (entry->res->flags & IORESOURCE_PREFETCH)
+	if (range->flags & IORESOURCE_PREFETCH)
 		flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;

 	bar_low = pcie_bar_low_val((u32)cpu_addr, flags);
@@ -531,13 +529,25 @@ static void xgene_pcie_setup_ib_reg(struct xgene_pcie *port,

 static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie *port)
 {
-	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port);
-	struct resource_entry *entry;
+	struct device_node *np = port->node;
+	struct of_pci_range range;
+	struct of_pci_range_parser parser;
+	struct device *dev = port->dev;
 	u8 ib_reg_mask = 0;

-	resource_list_for_each_entry(entry, &bridge->dma_ranges)
-		xgene_pcie_setup_ib_reg(port, entry, &ib_reg_mask);
+	if (of_pci_dma_range_parser_init(&parser, np)) {
+		dev_err(dev, "missing dma-ranges property\n");
+		return -EINVAL;
+	}
+
+	/* Get the dma-ranges from DT */
+	for_each_of_pci_range(&parser, &range) {
+		u64 end = range.cpu_addr + range.size - 1;
+
+		dev_dbg(dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n",
+			range.flags, range.cpu_addr, end, range.pci_addr);
+		xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask);
+	}
 	return 0;
 }
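The mask written by xgene_pcie_setup_ib_reg() assumes a power-of-two window: ~(size - 1) zeroes exactly the offset bits, and bit 0 (EN_REG) doubles as the window enable. A stand-alone model of that computation (window size illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the inbound window mask in xgene_pcie_setup_ib_reg():
     * for a power-of-two sized window, ~(size - 1) keeps only the bits
     * that select the window, and bit 0 enables it. */
    #define EN_REG 0x00000001ULL

    int main(void)
    {
    	uint64_t size = 1ULL << 30;	/* 1 GiB dma-range */
    	uint64_t mask = ~(size - 1) | EN_REG;

    	printf("mask = %#llx\n", (unsigned long long)mask);
    	return 0;
    }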


@@ -18,7 +18,7 @@
 /* NS: CLASS field is R/O, and set to wrong 0x200 value */
 static void bcma_pcie2_fixup_class(struct pci_dev *dev)
 {
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8011, bcma_pcie2_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8012, bcma_pcie2_fixup_class);


@@ -789,14 +789,13 @@ static int iproc_pcie_check_link(struct iproc_pcie *pcie)
 		return -EFAULT;
 	}

-	/* force class to PCI_CLASS_BRIDGE_PCI (0x0604) */
+	/* force class to PCI_CLASS_BRIDGE_PCI_NORMAL (0x060400) */
 #define PCI_BRIDGE_CTRL_REG_OFFSET	0x43c
-#define PCI_CLASS_BRIDGE_MASK		0xffff00
-#define PCI_CLASS_BRIDGE_SHIFT		8
+#define PCI_BRIDGE_CTRL_REG_CLASS_MASK	0xffffff
 	iproc_pci_raw_config_read32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET,
 				    4, &class);
-	class &= ~PCI_CLASS_BRIDGE_MASK;
-	class |= (PCI_CLASS_BRIDGE_PCI << PCI_CLASS_BRIDGE_SHIFT);
+	class &= ~PCI_BRIDGE_CTRL_REG_CLASS_MASK;
+	class |= PCI_CLASS_BRIDGE_PCI_NORMAL;
 	iproc_pci_raw_config_write32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET,
 				     4, class);
@@ -1581,7 +1580,7 @@ static void quirk_paxc_bridge(struct pci_dev *pdev)
 	 * code that the bridge is not an Ethernet device.
 	 */
 	if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
-		pdev->class = PCI_CLASS_BRIDGE_PCI << 8;
+		pdev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;

 	/*
 	 * MPSS is not being set properly (as it is currently 0). This is


@@ -292,7 +292,7 @@ static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
 	/* Set class code */
 	val = readl_relaxed(pcie->base + PCIE_PCI_IDS_1);
 	val &= ~GENMASK(31, 8);
-	val |= PCI_CLASS(PCI_CLASS_BRIDGE_PCI << 8);
+	val |= PCI_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL);
 	writel_relaxed(val, pcie->base + PCIE_PCI_IDS_1);

 	/* Mask all INTx interrupts */


@@ -65,6 +65,42 @@ struct rcar_pcie_host {
 	int			(*phy_init_fn)(struct rcar_pcie_host *host);
 };

+static DEFINE_SPINLOCK(pmsr_lock);
+
+static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base)
+{
+	unsigned long flags;
+	u32 pmsr, val;
+	int ret = 0;
+
+	spin_lock_irqsave(&pmsr_lock, flags);
+
+	if (!pcie_base || pm_runtime_suspended(pcie_dev)) {
+		ret = -EINVAL;
+		goto unlock_exit;
+	}
+
+	pmsr = readl(pcie_base + PMSR);
+
+	/*
+	 * Test if the PCIe controller received PM_ENTER_L1 DLLP and
+	 * the PCIe controller is not in L1 link state. If true, apply
+	 * fix, which will put the controller into L1 link state, from
+	 * which it can return to L0s/L0 on its own.
+	 */
+	if ((pmsr & PMEL1RX) && ((pmsr & PMSTATE) != PMSTATE_L1)) {
+		writel(L1IATN, pcie_base + PMCTLR);
+		ret = readl_poll_timeout_atomic(pcie_base + PMSR, val,
+						val & L1FAEG, 10, 1000);
+		WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret);
+		writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
+	}
+
+unlock_exit:
+	spin_unlock_irqrestore(&pmsr_lock, flags);
+	return ret;
+}
+
static struct rcar_pcie_host *msi_to_host(struct rcar_msi *msi) static struct rcar_pcie_host *msi_to_host(struct rcar_msi *msi)
{ {
return container_of(msi, struct rcar_pcie_host, msi); return container_of(msi, struct rcar_pcie_host, msi);
@ -78,6 +114,54 @@ static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
return val >> shift; return val >> shift;
} }
#ifdef CONFIG_ARM
#define __rcar_pci_rw_reg_workaround(instr) \
" .arch armv7-a\n" \
"1: " instr " %1, [%2]\n" \
"2: isb\n" \
"3: .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \
"4: mov %0, #" __stringify(PCIBIOS_SET_FAILED) "\n" \
" b 3b\n" \
" .popsection\n" \
" .pushsection __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 4b\n" \
" .long 2b, 4b\n" \
" .popsection\n"
#endif
static int rcar_pci_write_reg_workaround(struct rcar_pcie *pcie, u32 val,
unsigned int reg)
{
int error = PCIBIOS_SUCCESSFUL;
#ifdef CONFIG_ARM
asm volatile(
__rcar_pci_rw_reg_workaround("str")
: "+r"(error):"r"(val), "r"(pcie->base + reg) : "memory");
#else
rcar_pci_write_reg(pcie, val, reg);
#endif
return error;
}
static int rcar_pci_read_reg_workaround(struct rcar_pcie *pcie, u32 *val,
unsigned int reg)
{
int error = PCIBIOS_SUCCESSFUL;
#ifdef CONFIG_ARM
asm volatile(
__rcar_pci_rw_reg_workaround("ldr")
: "+r"(error), "=r"(*val) : "r"(pcie->base + reg) : "memory");
if (error != PCIBIOS_SUCCESSFUL)
PCI_SET_ERROR_RESPONSE(val);
#else
*val = rcar_pci_read_reg(pcie, reg);
#endif
return error;
}
/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ /* Serialization is provided by 'pci_lock' in drivers/pci/access.c */
static int rcar_pcie_config_access(struct rcar_pcie_host *host, static int rcar_pcie_config_access(struct rcar_pcie_host *host,
unsigned char access_type, struct pci_bus *bus, unsigned char access_type, struct pci_bus *bus,
@ -85,6 +169,14 @@ static int rcar_pcie_config_access(struct rcar_pcie_host *host,
{ {
struct rcar_pcie *pcie = &host->pcie; struct rcar_pcie *pcie = &host->pcie;
unsigned int dev, func, reg, index; unsigned int dev, func, reg, index;
int ret;
/* Wake the bus up in case it is in L1 state. */
ret = rcar_pcie_wakeup(pcie->dev, pcie->base);
if (ret) {
PCI_SET_ERROR_RESPONSE(data);
return PCIBIOS_SET_FAILED;
}
dev = PCI_SLOT(devfn); dev = PCI_SLOT(devfn);
func = PCI_FUNC(devfn); func = PCI_FUNC(devfn);
@ -141,14 +233,14 @@ static int rcar_pcie_config_access(struct rcar_pcie_host *host,
return PCIBIOS_DEVICE_NOT_FOUND; return PCIBIOS_DEVICE_NOT_FOUND;
if (access_type == RCAR_PCI_ACCESS_READ) if (access_type == RCAR_PCI_ACCESS_READ)
*data = rcar_pci_read_reg(pcie, PCIECDR); ret = rcar_pci_read_reg_workaround(pcie, data, PCIECDR);
else else
rcar_pci_write_reg(pcie, *data, PCIECDR); ret = rcar_pci_write_reg_workaround(pcie, *data, PCIECDR);
/* Disable the configuration access */ /* Disable the configuration access */
rcar_pci_write_reg(pcie, 0, PCIECCTLR); rcar_pci_write_reg(pcie, 0, PCIECCTLR);
return PCIBIOS_SUCCESSFUL; return ret;
} }
static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn, static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn,
@ -370,7 +462,7 @@ static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
* class to match. Hardware takes care of propagating the IDSETR * class to match. Hardware takes care of propagating the IDSETR
* settings, so there is no need to bother with a quirk. * settings, so there is no need to bother with a quirk.
*/ */
rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1); rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI_NORMAL << 8, IDSETR1);
/* /*
* Setup Secondary Bus Number & Subordinate Bus Number, even though * Setup Secondary Bus Number & Subordinate Bus Number, even though
@ -1050,40 +1142,10 @@ static struct platform_driver rcar_pcie_driver = {
}; };
#ifdef CONFIG_ARM #ifdef CONFIG_ARM
static DEFINE_SPINLOCK(pmsr_lock);
static int rcar_pcie_aarch32_abort_handler(unsigned long addr, static int rcar_pcie_aarch32_abort_handler(unsigned long addr,
unsigned int fsr, struct pt_regs *regs) unsigned int fsr, struct pt_regs *regs)
{ {
unsigned long flags; return !fixup_exception(regs);
u32 pmsr, val;
int ret = 0;
spin_lock_irqsave(&pmsr_lock, flags);
if (!pcie_base || pm_runtime_suspended(pcie_dev)) {
ret = 1;
goto unlock_exit;
}
pmsr = readl(pcie_base + PMSR);
/*
* Test if the PCIe controller received PM_ENTER_L1 DLLP and
* the PCIe controller is not in L1 link state. If true, apply
* fix, which will put the controller into L1 link state, from
* which it can return to L0s/L0 on its own.
*/
if ((pmsr & PMEL1RX) && ((pmsr & PMSTATE) != PMSTATE_L1)) {
writel(L1IATN, pcie_base + PMCTLR);
ret = readl_poll_timeout_atomic(pcie_base + PMSR, val,
val & L1FAEG, 10, 1000);
WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret);
writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
}
unlock_exit:
spin_unlock_irqrestore(&pmsr_lock, flags);
return ret;
} }
static const struct of_device_id rcar_pcie_abort_handler_of_match[] __initconst = { static const struct of_device_id rcar_pcie_abort_handler_of_match[] __initconst = {
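The wakeup helper above leans on readl_poll_timeout_atomic() from <linux/iopoll.h>, which spins reading a register until a condition holds or the timeout expires, returning 0 on success and -ETIMEDOUT otherwise. A minimal sketch of the same pattern, with a hypothetical REG_STATUS register and READY bit that are not from this driver:

	#include <linux/bits.h>
	#include <linux/io.h>
	#include <linux/iopoll.h>

	#define REG_STATUS	0x10	/* hypothetical register offset */
	#define STATUS_READY	BIT(0)	/* hypothetical ready bit */

	static int wait_until_ready(void __iomem *base)
	{
		u32 val;

		/* Poll every 10 us, give up after 1000 us. */
		return readl_poll_timeout_atomic(base + REG_STATUS, val,
						 val & STATUS_READY, 10, 1000);
	}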

View File

@@ -370,7 +370,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
 	rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
 			    PCIE_CORE_CONFIG_VENDOR);
 	rockchip_pcie_write(rockchip,
-			    PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT,
+			    PCI_CLASS_BRIDGE_PCI_NORMAL << 8,
 			    PCIE_RC_CONFIG_RID_CCR);
 
 	/* Clear THP cap's next cap pointer to remove L1 substate cap */

View File

@@ -134,7 +134,6 @@
 #define PCIE_RC_CONFIG_NORMAL_BASE	0x800000
 #define PCIE_RC_CONFIG_BASE		0xa00000
 #define PCIE_RC_CONFIG_RID_CCR		(PCIE_RC_CONFIG_BASE + 0x08)
-#define PCIE_RC_CONFIG_SCC_SHIFT	16
 #define PCIE_RC_CONFIG_DCR		(PCIE_RC_CONFIG_BASE + 0xc4)
 #define PCIE_RC_CONFIG_DCR_CSPL_SHIFT	18
 #define PCIE_RC_CONFIG_DCR_CSPL_LIMIT	0xff

View File

@@ -285,7 +285,17 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
 		if (ret)
 			dev_err(dev, "Data transfer failed\n");
 	} else {
-		memcpy(dst_addr, src_addr, reg->size);
+		void *buf;
+
+		buf = kzalloc(reg->size, GFP_KERNEL);
+		if (!buf) {
+			ret = -ENOMEM;
+			goto err_map_addr;
+		}
+
+		memcpy_fromio(buf, src_addr, reg->size);
+		memcpy_toio(dst_addr, buf, reg->size);
+		kfree(buf);
 	}
 
 	ktime_get_ts64(&end);
 	pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
@@ -441,7 +451,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
 		if (!epf_test->dma_supported) {
 			dev_err(dev, "Cannot transfer data using DMA\n");
 			ret = -EINVAL;
-			goto err_map_addr;
+			goto err_dma_map;
 		}
 
 		src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
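The rewritten copy path avoids plain memcpy() on PCI BAR mappings: __iomem space must be accessed through memcpy_fromio()/memcpy_toio(), staged through an ordinary kernel buffer. A minimal sketch of that bounce-buffer pattern (function and parameter names are hypothetical, not from this driver):

	#include <linux/errno.h>
	#include <linux/io.h>
	#include <linux/slab.h>

	/* Copy 'len' bytes between two MMIO windows via a bounce buffer. */
	static int mmio_copy(void __iomem *dst, const void __iomem *src,
			     size_t len)
	{
		void *buf = kzalloc(len, GFP_KERNEL);

		if (!buf)
			return -ENOMEM;

		memcpy_fromio(buf, src, len);	/* MMIO -> RAM */
		memcpy_toio(dst, buf, len);	/* RAM -> MMIO */
		kfree(buf);
		return 0;
	}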

View File

@@ -226,9 +226,9 @@ static void acpiphp_post_dock_fixup(struct acpi_device *adev)
 static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data,
 				       void **rv)
 {
+	struct acpi_device *adev = acpi_fetch_acpi_dev(handle);
 	struct acpiphp_bridge *bridge = data;
 	struct acpiphp_context *context;
-	struct acpi_device *adev;
 	struct acpiphp_slot *slot;
 	struct acpiphp_func *newfunc;
 	acpi_status status = AE_OK;
@@ -238,6 +238,9 @@ static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data,
 	struct pci_dev *pdev = bridge->pci_dev;
 	u32 val;
 
+	if (!adev)
+		return AE_OK;
+
 	status = acpi_evaluate_integer(handle, "_ADR", NULL, &adr);
 	if (ACPI_FAILURE(status)) {
 		if (status != AE_NOT_FOUND)
@@ -245,8 +248,6 @@ static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data,
 				   "can't evaluate _ADR (%#x)\n", status);
 		return AE_OK;
 	}
-	if (acpi_bus_get_device(handle, &adev))
-		return AE_OK;
 
 	device = (adr >> 16) & 0xffff;
 	function = adr & 0xffff;

View File

@@ -433,8 +433,9 @@ static int __init ibm_acpiphp_init(void)
 		goto init_return;
 	}
 	pr_debug("%s: found IBM aPCI device\n", __func__);
-	if (acpi_bus_get_device(ibm_acpi_handle, &device)) {
-		pr_err("%s: acpi_bus_get_device failed\n", __func__);
+	device = acpi_fetch_acpi_dev(ibm_acpi_handle);
+	if (!device) {
+		pr_err("%s: acpi_fetch_acpi_dev failed\n", __func__);
 		retval = -ENODEV;
 		goto init_return;
 	}
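These hotplug hunks (and the pci-acpi.c one further down) are mechanical conversions from acpi_bus_get_device(), which returned an int status plus an output parameter, to acpi_fetch_acpi_dev(), which returns the struct acpi_device * directly and NULL when the handle has no companion. Roughly, as a sketch with a hypothetical wrapper name:

	#include <linux/acpi.h>

	static struct acpi_device *companion_or_null(acpi_handle handle)
	{
		/* Old style: int err = acpi_bus_get_device(handle, &adev);
		 * New style: the device comes back directly, NULL on failure. */
		return acpi_fetch_acpi_dev(handle);
	}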

View File

@@ -1254,7 +1254,7 @@ static void __exit unload_cpqphpd(void)
 	struct pci_resource *res;
 	struct pci_resource *tres;
 
-	rc = compaq_nvram_store(cpqhp_rom_start);
+	compaq_nvram_store(cpqhp_rom_start);
 
 	ctrl = cpqhp_ctrl_list;

View File

@@ -881,7 +881,6 @@ irqreturn_t cpqhp_ctrl_intr(int IRQ, void *data)
 	u8 reset;
 	u16 misc;
 	u32 Diff;
-	u32 temp_dword;
 
 	misc = readw(ctrl->hpc_reg + MISC);
@@ -917,7 +916,7 @@ irqreturn_t cpqhp_ctrl_intr(int IRQ, void *data)
 		writel(Diff, ctrl->hpc_reg + INT_INPUT_CLEAR);
 
 		/* Read it back to clear any posted writes */
-		temp_dword = readl(ctrl->hpc_reg + INT_INPUT_CLEAR);
+		readl(ctrl->hpc_reg + INT_INPUT_CLEAR);
 
 		if (!Diff)
 			/* Clear all interrupts */
@@ -1412,7 +1411,6 @@ static u32 board_added(struct pci_func *func, struct controller *ctrl)
 	u32 rc = 0;
 	struct pci_func *new_slot = NULL;
 	struct pci_bus *bus = ctrl->pci_bus;
-	struct slot *p_slot;
 	struct resource_lists res_lists;
 
 	hp_slot = func->device - ctrl->slot_device_offset;
@@ -1459,7 +1457,7 @@ static u32 board_added(struct pci_func *func, struct controller *ctrl)
 	if (rc)
 		return rc;
 
-	p_slot = cpqhp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset);
+	cpqhp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset);
 
 	/* turn on board and blink green LED */
@@ -1614,7 +1612,6 @@ static u32 remove_board(struct pci_func *func, u32 replace_flag, struct controll
 	u8 device;
 	u8 hp_slot;
 	u8 temp_byte;
-	u32 rc;
 	struct resource_lists res_lists;
 	struct pci_func *temp_func;
@@ -1629,7 +1626,7 @@ static u32 remove_board(struct pci_func *func, u32 replace_flag, struct controll
 	/* When we get here, it is safe to change base address registers.
 	 * We will attempt to save the base address register lengths */
 	if (replace_flag || !ctrl->add_support)
-		rc = cpqhp_save_base_addr_length(ctrl, func);
+		cpqhp_save_base_addr_length(ctrl, func);
 	else if (!func->bus_head && !func->mem_head &&
 		 !func->p_mem_head && !func->io_head) {
 		/* Here we check to see if we've saved any of the board's
@@ -1647,7 +1644,7 @@ static u32 remove_board(struct pci_func *func, u32 replace_flag, struct controll
 		}
 
 		if (!skip)
-			rc = cpqhp_save_used_resources(ctrl, func);
+			cpqhp_save_used_resources(ctrl, func);
 	}
 	/* Change status to shutdown */
 	if (func->is_a_board)
@@ -1767,7 +1764,7 @@ void cpqhp_event_stop_thread(void)
 
 static void interrupt_event_handler(struct controller *ctrl)
 {
-	int loop = 0;
+	int loop;
 	int change = 1;
 	struct pci_func *func;
 	u8 hp_slot;
@@ -1885,7 +1882,6 @@ static void interrupt_event_handler(struct controller *ctrl)
 void cpqhp_pushbutton_thread(struct timer_list *t)
 {
 	u8 hp_slot;
-	u8 device;
 	struct pci_func *func;
 	struct slot *p_slot = from_timer(p_slot, t, task_event);
 	struct controller *ctrl = (struct controller *) p_slot->ctrl;
@@ -1893,8 +1889,6 @@ void cpqhp_pushbutton_thread(struct timer_list *t)
 	pushbutton_pending = NULL;
 
 	hp_slot = p_slot->hp_slot;
-	device = p_slot->device;
-
 	if (is_slot_enabled(ctrl, hp_slot)) {
 		p_slot->state = POWEROFF_STATE;
 		/* power Down board */
@@ -1951,15 +1945,12 @@ int cpqhp_process_SI(struct controller *ctrl, struct pci_func *func)
 	u32 tempdword;
 	int rc;
 	struct slot *p_slot;
-	int physical_slot = 0;
 
 	tempdword = 0;
 
 	device = func->device;
 	hp_slot = device - ctrl->slot_device_offset;
 	p_slot = cpqhp_find_slot(ctrl, device);
-	if (p_slot)
-		physical_slot = p_slot->number;
 
 	/* Check to see if the interlock is closed */
 	tempdword = readl(ctrl->hpc_reg + INT_INPUT_CLEAR);
@@ -2043,13 +2034,10 @@ int cpqhp_process_SS(struct controller *ctrl, struct pci_func *func)
 	unsigned int devfn;
 	struct slot *p_slot;
 	struct pci_bus *pci_bus = ctrl->pci_bus;
-	int physical_slot = 0;
 
 	device = func->device;
 	func = cpqhp_slot_find(ctrl->bus, device, index++);
 	p_slot = cpqhp_find_slot(ctrl, device);
-	if (p_slot)
-		physical_slot = p_slot->number;
 
 	/* Make sure there are no video controllers here */
 	while (func && !rc) {

View File

@@ -473,7 +473,7 @@ int cpqhp_save_slot_config(struct controller *ctrl, struct pci_func *new_slot)
 	int sub_bus;
 	int max_functions;
 	int function = 0;
-	int cloop = 0;
+	int cloop;
 	int stop_it;
 
 	ID = 0xFFFFFFFF;

View File

@@ -325,11 +325,9 @@ static u8 i2c_ctrl_write(struct controller *ctlr_ptr, void __iomem *WPGBbar, u8
 static u8 isa_ctrl_read(struct controller *ctlr_ptr, u8 offset)
 {
 	u16 start_address;
-	u16 end_address;
 	u8 data;
 
 	start_address = ctlr_ptr->u.isa_ctlr.io_start;
-	end_address = ctlr_ptr->u.isa_ctlr.io_end;
 
 	data = inb(start_address + offset);
 	return data;
 }

View File

@@ -1955,7 +1955,7 @@ static int __init update_bridge_ranges(struct bus_node **bus)
 				bus_sec = find_bus_wprev(sec_busno, NULL, 0);
 				/* this bus structure doesn't exist yet, PPB was configured during previous loading of ibmphp */
 				if (!bus_sec) {
-					bus_sec = alloc_error_bus(NULL, sec_busno, 1);
+					alloc_error_bus(NULL, sec_busno, 1);
 					/* the rest will be populated during NVRAM call */
 					return 0;
 				}
@@ -2114,6 +2114,5 @@ static int __init update_bridge_ranges(struct bus_node **bus)
 			}	/* end for function */
 		}	/* end for device */
 
-	bus = &bus_cur;
 	return 0;
 }

View File

@@ -98,6 +98,8 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout)
 		if (slot_status & PCI_EXP_SLTSTA_CC) {
 			pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
 						   PCI_EXP_SLTSTA_CC);
+			ctrl->cmd_busy = 0;
+			smp_mb();
 			return 1;
 		}
 		msleep(10);
@@ -1084,6 +1086,8 @@ static void quirk_cmd_compl(struct pci_dev *pdev)
 }
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110,
+			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,

View File

@@ -312,7 +312,7 @@ static void shpc_remove(struct pci_dev *dev)
 }
 
 static const struct pci_device_id shpcd_pci_tbl[] = {
-	{PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0)},
+	{PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL, ~0)},
 	{ /* end: all zeroes */ }
 };
 MODULE_DEVICE_TABLE(pci, shpcd_pci_tbl);

View File

@@ -321,6 +321,7 @@ static const struct pci_p2pdma_whitelist_entry {
 	{PCI_VENDOR_ID_INTEL,	0x2032, 0},
 	{PCI_VENDOR_ID_INTEL,	0x2033, 0},
 	{PCI_VENDOR_ID_INTEL,	0x2020, 0},
+	{PCI_VENDOR_ID_INTEL,	0x09a2, 0},
 	{}
 };

View File

@@ -89,9 +89,9 @@ int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
 		return -ENODEV;
 	}
 
-	ret = acpi_bus_get_device(handle, &adev);
-	if (ret)
-		return ret;
+	adev = acpi_fetch_acpi_dev(handle);
+	if (!adev)
+		return -ENODEV;
 
 	ret = acpi_get_rc_addr(adev, res);
 	if (ret) {

View File

@@ -21,8 +21,11 @@
 #include "pci-bridge-emul.h"
 
 #define PCI_BRIDGE_CONF_END	PCI_STD_HEADER_SIZEOF
+#define PCI_CAP_SSID_SIZEOF	(PCI_SSVID_DEVICE_ID + 2)
+#define PCI_CAP_SSID_START	PCI_BRIDGE_CONF_END
+#define PCI_CAP_SSID_END	(PCI_CAP_SSID_START + PCI_CAP_SSID_SIZEOF)
 #define PCI_CAP_PCIE_SIZEOF	(PCI_EXP_SLTSTA2 + 2)
-#define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
+#define PCI_CAP_PCIE_START	PCI_CAP_SSID_END
 #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF)
 
 /**
@@ -315,6 +318,25 @@ struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] =
 	},
 };
 
+static pci_bridge_emul_read_status_t
+pci_bridge_emul_read_ssid(struct pci_bridge_emul *bridge, int reg, u32 *value)
+{
+	switch (reg) {
+	case PCI_CAP_LIST_ID:
+		*value = PCI_CAP_ID_SSVID |
+			(bridge->has_pcie ? (PCI_CAP_PCIE_START << 8) : 0);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	case PCI_SSVID_VENDOR_ID:
+		*value = bridge->subsystem_vendor_id |
+			(bridge->subsystem_id << 16);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	default:
+		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+	}
+}
+
 /*
  * Initialize a pci_bridge_emul structure to represent a fake PCI
  * bridge configuration space. The caller needs to have initialized
@@ -328,10 +350,12 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
 	BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
 
 	/*
-	 * class_revision: Class is high 24 bits and revision is low 8 bit of this member,
-	 * while class for PCI Bridge Normal Decode has the 24-bit value: PCI_CLASS_BRIDGE_PCI << 8
+	 * class_revision: Class is high 24 bits and revision is low 8 bit
+	 * of this member, while class for PCI Bridge Normal Decode has the
+	 * 24-bit value: PCI_CLASS_BRIDGE_PCI_NORMAL
 	 */
-	bridge->conf.class_revision |= cpu_to_le32((PCI_CLASS_BRIDGE_PCI << 8) << 8);
+	bridge->conf.class_revision |=
+		cpu_to_le32(PCI_CLASS_BRIDGE_PCI_NORMAL << 8);
 	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
 	bridge->conf.cache_line_size = 0x10;
 	bridge->conf.status = cpu_to_le16(PCI_STATUS_CAP_LIST);
@@ -341,9 +365,17 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
 	if (!bridge->pci_regs_behavior)
 		return -ENOMEM;
 
-	if (bridge->has_pcie) {
+	if (bridge->subsystem_vendor_id)
+		bridge->conf.capabilities_pointer = PCI_CAP_SSID_START;
+	else if (bridge->has_pcie)
 		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+	else
+		bridge->conf.capabilities_pointer = 0;
+
+	if (bridge->conf.capabilities_pointer)
 		bridge->conf.status |= cpu_to_le16(PCI_STATUS_CAP_LIST);
+
+	if (bridge->has_pcie) {
 		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
 		bridge->pcie_conf.cap |= cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4);
 		bridge->pcie_cap_regs_behavior =
@@ -377,11 +409,20 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
 			~(BIT(10) << 16);
 	}
 
-	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
+	if (flags & PCI_BRIDGE_EMUL_NO_PREFMEM_FORWARD) {
 		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
 		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
 	}
 
+	if (flags & PCI_BRIDGE_EMUL_NO_IO_FORWARD) {
+		bridge->pci_regs_behavior[PCI_COMMAND / 4].ro |= PCI_COMMAND_IO;
+		bridge->pci_regs_behavior[PCI_COMMAND / 4].rw &= ~PCI_COMMAND_IO;
+		bridge->pci_regs_behavior[PCI_IO_BASE / 4].ro |= GENMASK(15, 0);
+		bridge->pci_regs_behavior[PCI_IO_BASE / 4].rw &= ~GENMASK(15, 0);
+		bridge->pci_regs_behavior[PCI_IO_BASE_UPPER16 / 4].ro = ~0;
+		bridge->pci_regs_behavior[PCI_IO_BASE_UPPER16 / 4].rw = 0;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(pci_bridge_emul_init);
@@ -413,25 +454,33 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
 	__le32 *cfgspace;
 	const struct pci_bridge_reg_behavior *behavior;
 
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) {
-		*value = 0;
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) {
-		*value = 0;
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) {
+	if (reg < PCI_BRIDGE_CONF_END) {
+		/* Emulated PCI space */
+		read_op = bridge->ops->read_base;
+		cfgspace = (__le32 *) &bridge->conf;
+		behavior = bridge->pci_regs_behavior;
+	} else if (reg >= PCI_CAP_SSID_START && reg < PCI_CAP_SSID_END && bridge->subsystem_vendor_id) {
+		/* Emulated PCI Bridge Subsystem Vendor ID capability */
+		reg -= PCI_CAP_SSID_START;
+		read_op = pci_bridge_emul_read_ssid;
+		cfgspace = NULL;
+		behavior = NULL;
+	} else if (reg >= PCI_CAP_PCIE_START && reg < PCI_CAP_PCIE_END && bridge->has_pcie) {
+		/* Our emulated PCIe capability */
 		reg -= PCI_CAP_PCIE_START;
 		read_op = bridge->ops->read_pcie;
 		cfgspace = (__le32 *) &bridge->pcie_conf;
 		behavior = bridge->pcie_cap_regs_behavior;
+	} else if (reg >= PCI_CFG_SPACE_SIZE && bridge->has_pcie) {
+		/* PCIe extended capability space */
+		reg -= PCI_CFG_SPACE_SIZE;
+		read_op = bridge->ops->read_ext;
+		cfgspace = NULL;
+		behavior = NULL;
 	} else {
-		read_op = bridge->ops->read_base;
-		cfgspace = (__le32 *) &bridge->conf;
-		behavior = bridge->pci_regs_behavior;
+		/* Not implemented */
+		*value = 0;
+		return PCIBIOS_SUCCESSFUL;
 	}
 
 	if (read_op)
@@ -439,15 +488,20 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
 	else
 		ret = PCI_BRIDGE_EMUL_NOT_HANDLED;
 
-	if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED)
-		*value = le32_to_cpu(cfgspace[reg / 4]);
+	if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED) {
+		if (cfgspace)
+			*value = le32_to_cpu(cfgspace[reg / 4]);
+		else
+			*value = 0;
+	}
 
 	/*
 	 * Make sure we never return any reserved bit with a value
 	 * different from 0.
 	 */
-	*value &= behavior[reg / 4].ro | behavior[reg / 4].rw |
-		  behavior[reg / 4].w1c;
+	if (behavior)
+		*value &= behavior[reg / 4].ro | behavior[reg / 4].rw |
+			  behavior[reg / 4].w1c;
 
 	if (size == 1)
 		*value = (*value >> (8 * (where & 3))) & 0xff;
@@ -475,11 +529,31 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
 	__le32 *cfgspace;
 	const struct pci_bridge_reg_behavior *behavior;
 
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END)
-		return PCIBIOS_SUCCESSFUL;
+	ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old);
+	if (ret != PCIBIOS_SUCCESSFUL)
+		return ret;
 
-	if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END)
+	if (reg < PCI_BRIDGE_CONF_END) {
+		/* Emulated PCI space */
+		write_op = bridge->ops->write_base;
+		cfgspace = (__le32 *) &bridge->conf;
+		behavior = bridge->pci_regs_behavior;
+	} else if (reg >= PCI_CAP_PCIE_START && reg < PCI_CAP_PCIE_END && bridge->has_pcie) {
+		/* Our emulated PCIe capability */
+		reg -= PCI_CAP_PCIE_START;
+		write_op = bridge->ops->write_pcie;
+		cfgspace = (__le32 *) &bridge->pcie_conf;
+		behavior = bridge->pcie_cap_regs_behavior;
+	} else if (reg >= PCI_CFG_SPACE_SIZE && bridge->has_pcie) {
+		/* PCIe extended capability space */
+		reg -= PCI_CFG_SPACE_SIZE;
+		write_op = bridge->ops->write_ext;
+		cfgspace = NULL;
+		behavior = NULL;
+	} else {
+		/* Not implemented */
 		return PCIBIOS_SUCCESSFUL;
+	}
 
 	shift = (where & 0x3) * 8;
@@ -492,44 +566,38 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
 	else
 		return PCIBIOS_BAD_REGISTER_NUMBER;
 
-	ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old);
-	if (ret != PCIBIOS_SUCCESSFUL)
-		return ret;
-
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) {
-		reg -= PCI_CAP_PCIE_START;
-		write_op = bridge->ops->write_pcie;
-		cfgspace = (__le32 *) &bridge->pcie_conf;
-		behavior = bridge->pcie_cap_regs_behavior;
-	} else {
-		write_op = bridge->ops->write_base;
-		cfgspace = (__le32 *) &bridge->conf;
-		behavior = bridge->pci_regs_behavior;
-	}
+	if (behavior) {
+		/* Keep all bits, except the RW bits */
+		new = old & (~mask | ~behavior[reg / 4].rw);
 
-	/* Keep all bits, except the RW bits */
-	new = old & (~mask | ~behavior[reg / 4].rw);
+		/* Update the value of the RW bits */
+		new |= (value << shift) & (behavior[reg / 4].rw & mask);
 
-	/* Update the value of the RW bits */
-	new |= (value << shift) & (behavior[reg / 4].rw & mask);
+		/* Clear the W1C bits */
+		new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
+	} else {
+		new = old & ~mask;
+		new |= (value << shift) & mask;
+	}
 
-	/* Clear the W1C bits */
-	new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
+	if (cfgspace) {
+		/* Save the new value with the cleared W1C bits into the cfgspace */
+		cfgspace[reg / 4] = cpu_to_le32(new);
+	}
 
-	/* Save the new value with the cleared W1C bits into the cfgspace */
-	cfgspace[reg / 4] = cpu_to_le32(new);
+	if (behavior) {
+		/*
+		 * Clear the W1C bits not specified by the write mask, so that the
+		 * write_op() does not clear them.
		 */
+		new &= ~(behavior[reg / 4].w1c & ~mask);
 
-	/*
-	 * Clear the W1C bits not specified by the write mask, so that the
-	 * write_op() does not clear them.
-	 */
-	new &= ~(behavior[reg / 4].w1c & ~mask);
-
-	/*
-	 * Set the W1C bits specified by the write mask, so that write_op()
-	 * knows about that they are to be cleared.
-	 */
-	new |= (value << shift) & (behavior[reg / 4].w1c & mask);
+		/*
+		 * Set the W1C bits specified by the write mask, so that write_op()
+		 * knows about that they are to be cleared.
+		 */
+		new |= (value << shift) & (behavior[reg / 4].w1c & mask);
+	}
 
 	if (write_op)
 		write_op(bridge, reg, old, new, mask);

View File

@@ -90,6 +90,14 @@ struct pci_bridge_emul_ops {
 	 */
 	pci_bridge_emul_read_status_t (*read_pcie)(struct pci_bridge_emul *bridge,
						   int reg, u32 *value);
+
+	/*
+	 * Same as ->read_base(), except it is for reading from the
+	 * PCIe extended capability configuration space.
+	 */
+	pci_bridge_emul_read_status_t (*read_ext)(struct pci_bridge_emul *bridge,
+						  int reg, u32 *value);
+
 	/*
 	 * Called when writing to the regular PCI bridge configuration
 	 * space. old is the current value, new is the new value being
@@ -105,6 +113,13 @@ struct pci_bridge_emul_ops {
 	 */
 	void (*write_pcie)(struct pci_bridge_emul *bridge, int reg,
			   u32 old, u32 new, u32 mask);
+
+	/*
+	 * Same as ->write_base(), except it is for writing from the
+	 * PCIe extended capability configuration space.
+	 */
+	void (*write_ext)(struct pci_bridge_emul *bridge, int reg,
+			  u32 old, u32 new, u32 mask);
 };
 
 struct pci_bridge_reg_behavior;
@@ -112,15 +127,27 @@ struct pci_bridge_reg_behavior;
 struct pci_bridge_emul {
 	struct pci_bridge_emul_conf conf;
 	struct pci_bridge_emul_pcie_conf pcie_conf;
-	struct pci_bridge_emul_ops *ops;
+	const struct pci_bridge_emul_ops *ops;
 	struct pci_bridge_reg_behavior *pci_regs_behavior;
 	struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
 	void *data;
 	bool has_pcie;
+	u16 subsystem_vendor_id;
+	u16 subsystem_id;
 };
 
 enum {
-	PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
+	/*
+	 * PCI bridge does not support forwarding of prefetchable memory
+	 * requests between primary and secondary buses.
+	 */
+	PCI_BRIDGE_EMUL_NO_PREFMEM_FORWARD = BIT(0),
+
+	/*
+	 * PCI bridge does not support forwarding of IO requests between
+	 * primary and secondary buses.
	 */
+	PCI_BRIDGE_EMUL_NO_IO_FORWARD = BIT(1),
 };
 
 int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
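Putting the pieces of the extended interface together, a controller driver could now advertise a Subsystem Vendor ID capability and mask IO forwarding roughly as below. This is a sketch under assumptions, not code from the series: the my_* names and the 0x1234/0x5678 IDs are made up, and only the interface shown in the two hunks above is relied upon.

	#include <linux/pci.h>
	#include "pci-bridge-emul.h"

	struct my_pcie {				/* hypothetical driver state */
		struct pci_bridge_emul bridge;
	};

	static pci_bridge_emul_read_status_t
	my_bridge_read_ext(struct pci_bridge_emul *bridge, int reg, u32 *value)
	{
		/* No extended capabilities emulated yet: reads come back as 0. */
		return PCI_BRIDGE_EMUL_NOT_HANDLED;
	}

	static const struct pci_bridge_emul_ops my_bridge_ops = {
		.read_ext = my_bridge_read_ext,
	};

	static int my_setup_bridge(struct my_pcie *pcie)
	{
		struct pci_bridge_emul *bridge = &pcie->bridge;

		bridge->ops = &my_bridge_ops;
		bridge->has_pcie = true;
		bridge->subsystem_vendor_id = 0x1234;	/* enables the SSID capability */
		bridge->subsystem_id = 0x5678;

		/* Bridge cannot forward IO: IO window reads as read-only zero. */
		return pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_IO_FORWARD);
	}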

View File

@@ -754,8 +754,6 @@ static ssize_t pci_read_config(struct file *filp, struct kobject *kobj,
 		u8 val;
 		pci_user_read_config_byte(dev, off, &val);
 		data[off - init_off] = val;
-		off++;
-		--size;
 	}
 
 	pci_config_pm_runtime_put(dev);
@@ -818,11 +816,8 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj,
 		size -= 2;
 	}
 
-	if (size) {
+	if (size)
 		pci_user_write_config_byte(dev, off, data[off - init_off]);
-		off++;
-		--size;
-	}
 
 	pci_config_pm_runtime_put(dev);

View File

@@ -43,7 +43,7 @@ config PCIEAER_INJECT
	  error injection can fake almost all kinds of errors with the
	  help of a user space helper tool aer-inject, which can be
	  gotten from:
-	     https://www.kernel.org/pub/linux/utils/pci/aer-inject/
+	     https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
 
 #
 # PCI Express ECRC

View File

@@ -6,7 +6,7 @@
  * trigger various real hardware errors. Software based error
  * injection can fake almost all kinds of errors with the help of a
  * user space helper tool aer-inject, which can be gotten from:
- *   https://www.kernel.org/pub/linux/utils/pci/aer-inject/
+ *   https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
  *
  * Copyright 2009 Intel Corporation.
  *	Huang Ying <ying.huang@intel.com>

View File

@@ -178,9 +178,9 @@ static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
  */
 static const struct pci_device_id port_pci_ids[] = {
 	/* handle any PCI-Express port */
-	{ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0) },
+	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL, ~0) },
 	/* subtractive decode PCI-to-PCI bridge, class type is 060401h */
-	{ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x01), ~0) },
+	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE, ~0) },
 	/* handle any Root Complex Event Collector */
 	{ PCI_DEVICE_CLASS(((PCI_CLASS_SYSTEM_RCEC << 8) | 0x00), ~0) },
 	{ },

View File

@@ -99,9 +99,7 @@ static ssize_t proc_bus_pci_read(struct file *file, char __user *buf,
 		unsigned char val;
 		pci_user_read_config_byte(dev, pos, &val);
 		__put_user(val, buf);
-		buf++;
 		pos++;
-		cnt--;
 	}
 
 	pci_config_pm_runtime_put(dev);
@@ -176,9 +174,7 @@ static ssize_t proc_bus_pci_write(struct file *file, const char __user *buf,
 		unsigned char val;
 		__get_user(val, buf);
 		pci_user_write_config_byte(dev, pos, val);
-		buf++;
 		pos++;
-		cnt--;
 	}
 
 	pci_config_pm_runtime_put(dev);
@@ -188,10 +184,12 @@ static ssize_t proc_bus_pci_write(struct file *file, const char __user *buf,
 	return nbytes;
 }
 
+#ifdef HAVE_PCI_MMAP
 struct pci_filp_private {
 	enum pci_mmap_state mmap_state;
 	int write_combine;
 };
+#endif /* HAVE_PCI_MMAP */
 
 static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
			       unsigned long arg)

View File

@@ -1811,6 +1811,18 @@ static void quirk_alder_ioapic(struct pci_dev *pdev)
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic);
 #endif
 
+static void quirk_no_msi(struct pci_dev *dev)
+{
+	pci_info(dev, "avoiding MSI to work around a hardware defect\n");
+	dev->no_msi = 1;
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_no_msi);
+
 static void quirk_pcie_mch(struct pci_dev *pdev)
 {
 	pdev->no_msi = 1;

View File

@@ -994,7 +994,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 {
 	struct pci_dev *dev;
 	resource_size_t min_align, align, size, size0, size1;
-	resource_size_t aligns[18]; /* Alignments from 1MB to 128GB */
+	resource_size_t aligns[24]; /* Alignments from 1MB to 8TB */
 	int order, max_order;
 	struct resource *b_res = find_bus_resource_of_type(bus,
					mask | IORESOURCE_PREFETCH, type);
@@ -1525,7 +1525,7 @@ static void pci_bridge_release_resources(struct pci_bus *bus,
 {
 	struct pci_dev *dev = bus->self;
 	struct resource *r;
-	unsigned int old_flags = 0;
+	unsigned int old_flags;
 	struct resource *b_res;
 	int idx = 1;
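The aligns[] resize is pure arithmetic: entry n holds the alignment bookkeeping for a window of SZ_1M << n bytes, so 18 entries (indices 0..17) topped out at 2^37 = 128GB, while 24 entries (indices 0..23) reach 2^43 = 8TB, matching the SZ_1T definition added at the end of this series. Expressed as compile-time checks (illustrative only, not part of the patch):

	#include <linux/build_bug.h>
	#include <linux/sizes.h>

	/* aligns[18]: largest order is 17 -> 1MB << 17 = 128GB */
	static_assert(((unsigned long long)SZ_1M << 17) == 128ULL * SZ_1G);
	/* aligns[24]: largest order is 23 -> 1MB << 23 = 8TB */
	static_assert(((unsigned long long)SZ_1M << 23) == 8 * SZ_1T);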

View File

@@ -1,32 +1,11 @@
+// SPDX-License-Identifier: MIT
 /*
  * vgaarb.c: Implements the VGA arbitration. For details refer to
  * Documentation/gpu/vgaarbiter.rst
  *
- *
  * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
  * (C) Copyright 2007 Paulo R. Zanoni <przanoni@gmail.com>
  * (C) Copyright 2007, 2009 Tiago Vignatti <vignatti@freedesktop.org>
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS
- * IN THE SOFTWARE.
- *
  */
 
 #define pr_fmt(fmt) "vgaarb: " fmt
@@ -72,6 +51,7 @@ struct vga_device {
 	unsigned int io_norm_cnt;	/* normal IO count */
 	unsigned int mem_norm_cnt;	/* normal MEM count */
 	bool bridge_has_one_vga;
+	bool is_firmware_default;	/* device selected by firmware */
 	unsigned int (*set_decode)(struct pci_dev *pdev, bool decode);
 };
 
@@ -122,8 +102,6 @@ both:
 /* this is only used a cookie - it should not be dereferenced */
 static struct pci_dev *vga_default;
 
-static void vga_arb_device_card_gone(struct pci_dev *pdev);
-
 /* Find somebody in our list */
 static struct vga_device *vgadev_find(struct pci_dev *pdev)
 {
@@ -565,6 +543,144 @@ bail:
 }
 EXPORT_SYMBOL(vga_put);
 
+static bool vga_is_firmware_default(struct pci_dev *pdev)
+{
+#if defined(CONFIG_X86) || defined(CONFIG_IA64)
+	u64 base = screen_info.lfb_base;
+	u64 size = screen_info.lfb_size;
+	u64 limit;
+	resource_size_t start, end;
+	unsigned long flags;
+	int i;
+
+	/* Select the device owning the boot framebuffer if there is one */
+
+	if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
+		base |= (u64)screen_info.ext_lfb_base << 32;
+
+	limit = base + size;
+
+	/* Does firmware framebuffer belong to us? */
+	for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+		flags = pci_resource_flags(pdev, i);
+
+		if ((flags & IORESOURCE_MEM) == 0)
+			continue;
+
+		start = pci_resource_start(pdev, i);
+		end = pci_resource_end(pdev, i);
+
+		if (!start || !end)
+			continue;
+
+		if (base < start || limit >= end)
+			continue;
+
+		return true;
+	}
+#endif
+	return false;
+}
+
+static bool vga_arb_integrated_gpu(struct device *dev)
+{
+#if defined(CONFIG_ACPI)
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+
+	return adev && !strcmp(acpi_device_hid(adev), ACPI_VIDEO_HID);
+#else
+	return false;
+#endif
+}
+
+/*
+ * Return true if vgadev is a better default VGA device than the best one
+ * we've seen so far.
+ */
+static bool vga_is_boot_device(struct vga_device *vgadev)
+{
+	struct vga_device *boot_vga = vgadev_find(vga_default_device());
+	struct pci_dev *pdev = vgadev->pdev;
+	u16 cmd, boot_cmd;
+
+	/*
+	 * We select the default VGA device in this order:
+	 *   Firmware framebuffer (see vga_arb_select_default_device())
+	 *   Legacy VGA device (owns VGA_RSRC_LEGACY_MASK)
+	 *   Non-legacy integrated device (see vga_arb_select_default_device())
+	 *   Non-legacy discrete device (see vga_arb_select_default_device())
+	 *   Other device (see vga_arb_select_default_device())
+	 */
+
+	/*
+	 * We always prefer a firmware default device, so if we've already
+	 * found one, there's no need to consider vgadev.
+	 */
+	if (boot_vga && boot_vga->is_firmware_default)
+		return false;
+
+	if (vga_is_firmware_default(pdev)) {
+		vgadev->is_firmware_default = true;
+		return true;
+	}
+
+	/*
+	 * A legacy VGA device has MEM and IO enabled and any bridges
+	 * leading to it have PCI_BRIDGE_CTL_VGA enabled so the legacy
+	 * resources ([mem 0xa0000-0xbffff], [io 0x3b0-0x3bb], etc) are
+	 * routed to it.
+	 *
+	 * We use the first one we find, so if we've already found one,
+	 * vgadev is no better.
+	 */
+	if (boot_vga &&
+	    (boot_vga->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)
+		return false;
+
+	if ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)
+		return true;
+
+	/*
+	 * If we haven't found a legacy VGA device, accept a non-legacy
+	 * device. It may have either IO or MEM enabled, and bridges may
+	 * not have PCI_BRIDGE_CTL_VGA enabled, so it may not be able to
+	 * use legacy VGA resources. Prefer an integrated GPU over others.
+	 */
+	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
+	if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
+
+		/*
+		 * An integrated GPU overrides a previous non-legacy
+		 * device. We expect only a single integrated GPU, but if
+		 * there are more, we use the *last* because that was the
+		 * previous behavior.
+		 */
+		if (vga_arb_integrated_gpu(&pdev->dev))
+			return true;
+
+		/*
+		 * We prefer the first non-legacy discrete device we find.
+		 * If we already found one, vgadev is no better.
+		 */
+		if (boot_vga) {
+			pci_read_config_word(boot_vga->pdev, PCI_COMMAND,
+					     &boot_cmd);
+			if (boot_cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY))
+				return false;
+		}
+		return true;
+	}
+
+	/*
+	 * vgadev has neither IO nor MEM enabled. If we haven't found any
+	 * other VGA devices, it is the best candidate so far.
+	 */
+	if (!boot_vga)
+		return true;
+
+	return false;
+}
+
 /*
  * Rules for using a bridge to control a VGA descendant decoding: if a bridge
  * has only one VGA descendant then it can be used to control the VGA routing
@@ -582,8 +698,10 @@ static void vga_arbiter_check_bridge_sharing(struct vga_device *vgadev)
 
 	vgadev->bridge_has_one_vga = true;
 
-	if (list_empty(&vga_list))
+	if (list_empty(&vga_list)) {
+		vgaarb_info(&vgadev->pdev->dev, "bridge control possible\n");
 		return;
+	}
 
 	/* okay iterate the new devices bridge hierarachy */
 	new_bus = vgadev->pdev->bus;
@@ -622,6 +740,11 @@ static void vga_arbiter_check_bridge_sharing(struct vga_device *vgadev)
 		}
 		new_bus = new_bus->parent;
 	}
+
+	if (vgadev->bridge_has_one_vga)
+		vgaarb_info(&vgadev->pdev->dev, "bridge control possible\n");
+	else
+		vgaarb_info(&vgadev->pdev->dev, "no bridge control possible\n");
 }
 
 /*
@@ -692,12 +815,10 @@ static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)
 		bus = bus->parent;
 	}
 
-	/* Deal with VGA default device. Use first enabled one
-	 * by default if arch doesn't have it's own hook
-	 */
-	if (vga_default == NULL &&
-	    ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)) {
-		vgaarb_info(&pdev->dev, "setting as boot VGA device\n");
+	if (vga_is_boot_device(vgadev)) {
+		vgaarb_info(&pdev->dev, "setting as boot VGA device%s\n",
+			    vga_default_device() ?
+			    " (overriding previous)" : "");
 		vga_set_default_device(pdev);
 	}
 
@@ -741,10 +862,6 @@ static bool vga_arbiter_del_pci_device(struct pci_dev *pdev)
 	/* Remove entry from list */
 	list_del(&vgadev->list);
 	vga_count--;
-	/* Notify userland driver that the device is gone so it discards
-	 * it's copies of the pci_dev pointer
-	 */
-	vga_arb_device_card_gone(pdev);
 
 	/* Wake up all possible waiters */
 	wake_up_all(&vga_wait_queue);
@@ -994,9 +1111,7 @@ static ssize_t vga_arb_read(struct file *file, char __user *buf,
 	if (lbuf == NULL)
 		return -ENOMEM;
 
-	/* Shields against vga_arb_device_card_gone (pci_dev going
-	 * away), and allows access to vga list
-	 */
+	/* Protects vga_list */
 	spin_lock_irqsave(&vga_lock, flags);
 
 	/* If we are targeting the default, use it */
@@ -1013,8 +1128,6 @@ static ssize_t vga_arb_read(struct file *file, char __user *buf,
 		/* Wow, it's not in the list, that shouldn't happen,
 		 * let's fix us up and return invalid card
		 */
-		if (pdev == priv->target)
-			vga_arb_device_card_gone(pdev);
 		spin_unlock_irqrestore(&vga_lock, flags);
 		len = sprintf(lbuf, "invalid");
 		goto done;
@@ -1022,7 +1135,7 @@ static ssize_t vga_arb_read(struct file *file, char __user *buf,
 
 	/* Fill the buffer with infos */
 	len = snprintf(lbuf, 1024,
-		       "count:%d,PCI:%s,decodes=%s,owns=%s,locks=%s(%d:%d)\n",
+		       "count:%d,PCI:%s,decodes=%s,owns=%s,locks=%s(%u:%u)\n",
 		       vga_decode_count, pci_name(pdev),
 		       vga_iostate_to_str(vgadev->decodes),
 		       vga_iostate_to_str(vgadev->owns),
@@ -1358,10 +1471,6 @@ static int vga_arb_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static void vga_arb_device_card_gone(struct pci_dev *pdev)
-{
-}
-
 /*
  * callback any registered clients to let them know we have a
  * change in VGA cards
@@ -1430,111 +1539,10 @@ static struct miscdevice vga_arb_device = {
 	MISC_DYNAMIC_MINOR, "vga_arbiter", &vga_arb_device_fops
 };
 
-#if defined(CONFIG_ACPI)
-static bool vga_arb_integrated_gpu(struct device *dev)
-{
-	struct acpi_device *adev = ACPI_COMPANION(dev);
-
-	return adev && !strcmp(acpi_device_hid(adev), ACPI_VIDEO_HID);
-}
-#else
-static bool vga_arb_integrated_gpu(struct device *dev)
-{
-	return false;
-}
-#endif
-
-static void __init vga_arb_select_default_device(void)
-{
-	struct pci_dev *pdev, *found = NULL;
-	struct vga_device *vgadev;
-
-#if defined(CONFIG_X86) || defined(CONFIG_IA64)
-	u64 base = screen_info.lfb_base;
-	u64 size = screen_info.lfb_size;
-	u64 limit;
-	resource_size_t start, end;
-	unsigned long flags;
-	int i;
-
-	if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
-		base |= (u64)screen_info.ext_lfb_base << 32;
-
-	limit = base + size;
-
-	list_for_each_entry(vgadev, &vga_list, list) {
-		struct device *dev = &vgadev->pdev->dev;
-		/*
-		 * Override vga_arbiter_add_pci_device()'s I/O based detection
-		 * as it may take the wrong device (e.g. on Apple system under
-		 * EFI).
-		 *
-		 * Select the device owning the boot framebuffer if there is
-		 * one.
-		 */
-
-		/* Does firmware framebuffer belong to us? */
-		for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
-			flags = pci_resource_flags(vgadev->pdev, i);
-
-			if ((flags & IORESOURCE_MEM) == 0)
-				continue;
-
-			start = pci_resource_start(vgadev->pdev, i);
-			end = pci_resource_end(vgadev->pdev, i);
-
-			if (!start || !end)
-				continue;
-
-			if (base < start || limit >= end)
-				continue;
-
-			if (!vga_default_device())
-				vgaarb_info(dev, "setting as boot device\n");
-			else if (vgadev->pdev != vga_default_device())
-				vgaarb_info(dev, "overriding boot device\n");
-			vga_set_default_device(vgadev->pdev);
-		}
-	}
-#endif
-
-	if (!vga_default_device()) {
-		list_for_each_entry_reverse(vgadev, &vga_list, list) {
-			struct device *dev = &vgadev->pdev->dev;
-			u16 cmd;
-
-			pdev = vgadev->pdev;
-			pci_read_config_word(pdev, PCI_COMMAND, &cmd);
-			if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
-				found = pdev;
-				if (vga_arb_integrated_gpu(dev))
-					break;
-			}
-		}
-	}
-
-	if (found) {
-		vgaarb_info(&found->dev, "setting as boot device (VGA legacy resources not available)\n");
-		vga_set_default_device(found);
-		return;
-	}
-
-	if (!vga_default_device()) {
-		vgadev = list_first_entry_or_null(&vga_list,
-						  struct vga_device, list);
-		if (vgadev) {
-			struct device *dev = &vgadev->pdev->dev;
-			vgaarb_info(dev, "setting as boot device (VGA legacy resources not available)\n");
-			vga_set_default_device(vgadev->pdev);
-		}
-	}
-}
-
 static int __init vga_arb_device_init(void)
 {
 	int rc;
 	struct pci_dev *pdev;
-	struct vga_device *vgadev;
 
 	rc = misc_register(&vga_arb_device);
 	if (rc < 0)
@@ -1550,18 +1558,7 @@ static int __init vga_arb_device_init(void)
				PCI_ANY_ID, pdev)) != NULL)
 		vga_arbiter_add_pci_device(pdev);
 
-	list_for_each_entry(vgadev, &vga_list, list) {
-		struct device *dev = &vgadev->pdev->dev;
-
-		if (vgadev->bridge_has_one_vga)
-			vgaarb_info(dev, "bridge control possible\n");
-		else
-			vgaarb_info(dev, "no bridge control possible\n");
-	}
-
-	vga_arb_select_default_device();
-
 	pr_info("loaded\n");
 
 	return rc;
 }
-subsys_initcall(vga_arb_device_init);
+subsys_initcall_sync(vga_arb_device_init);
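With the selection folded into vga_arbiter_add_pci_device(), the default VGA device is now chosen incrementally as devices are enumerated, but it is still published through vga_default_device() from <linux/vgaarb.h>. A consumer such as a DRM driver can check whether it owns the firmware console roughly like this (sketch only; the function name is hypothetical):

	#include <linux/pci.h>
	#include <linux/vgaarb.h>

	/* Is 'pdev' the device the arbiter picked as the boot VGA device? */
	static bool drives_boot_console(struct pci_dev *pdev)
	{
		return vga_default_device() == pdev;
	}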

View File

@@ -668,6 +668,7 @@ struct pci_bus {
 	struct bin_attribute	*legacy_io;	/* Legacy I/O for this bus */
 	struct bin_attribute	*legacy_mem;	/* Legacy mem */
 	unsigned int		is_added:1;
+	unsigned int		unsafe_warn:1;	/* warned about RW1C config write */
 };
 
 #define to_pci_bus(n)	container_of(n, struct pci_bus, dev)

View File

@@ -60,6 +60,8 @@
 #define PCI_CLASS_BRIDGE_EISA		0x0602
 #define PCI_CLASS_BRIDGE_MC		0x0603
 #define PCI_CLASS_BRIDGE_PCI		0x0604
+#define PCI_CLASS_BRIDGE_PCI_NORMAL		0x060400
+#define PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE	0x060401
 #define PCI_CLASS_BRIDGE_PCMCIA		0x0605
 #define PCI_CLASS_BRIDGE_NUBUS		0x0606
 #define PCI_CLASS_BRIDGE_CARDBUS	0x0607

View File

@@ -47,6 +47,8 @@
 #define SZ_8G				_AC(0x200000000, ULL)
 #define SZ_16G				_AC(0x400000000, ULL)
 #define SZ_32G				_AC(0x800000000, ULL)
+
+#define SZ_1T				_AC(0x10000000000, ULL)
 #define SZ_64T				_AC(0x400000000000, ULL)
 
 #endif /* __LINUX_SIZES_H__ */
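A quick check of the new constant (illustrative only, not part of the patch): 0x10000000000 is 2^40, i.e. 1024 gigabytes, and it composes consistently with the pre-existing SZ_64T:

	#include <linux/build_bug.h>
	#include <linux/sizes.h>

	static_assert(SZ_1T == 1024ULL * SZ_1G);	/* 2^40 bytes */
	static_assert(64 * SZ_1T == SZ_64T);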