Updates for the interrupt core and driver subsystem:
- Core: The bulk is the rework of the MSI subsystem to support per-device
  MSI interrupt domains. This solves conceptual problems of the current
  PCI/MSI design, which stand in the way of providing support for
  PCI/MSI[-X] and the upcoming PCI/IMS mechanism on the same device.

  IMS (Interrupt Message Store) is a new specification which allows device
  manufacturers to provide implementation-defined storage for MSI messages,
  contrary to the uniform, specification-defined storage mechanisms for
  PCI/MSI and PCI/MSI-X. IMS not only makes it possible to overcome the
  size limitations of the MSI-X table, but also gives the device
  manufacturer the freedom to store the message in arbitrary places, even
  in host memory which is shared with the device.

  There have been several attempts to glue this into the current MSI code,
  but after lengthy discussions it turned out that there is a fundamental
  design problem in the current PCI/MSI-X implementation. This needs some
  historical background.

  When PCI/MSI[-X] support was added around 2003, interrupt management was
  completely different from what we have today in the actively developed
  architectures. Interrupt management was completely architecture specific,
  and while there were attempts to create common infrastructure, the
  commonalities were rudimentary and just provided shared data structures
  and interfaces so that drivers could be written in an
  architecture-agnostic way.

  The initial PCI/MSI[-X] support obviously plugged into this model, which
  resulted in some basic shared infrastructure in the PCI core code for
  setting up MSI descriptors (a pure software construct for holding data
  relevant for a particular MSI interrupt), but the actual association to
  Linux interrupts was completely architecture specific. This model is
  still supported today to keep museum architectures and notorious
  stragglers alive.
  In 2013 Intel tried to add support for hot-pluggable IO/APICs to the
  kernel, which created yet another architecture specific mechanism and
  resulted in an unholy mess on top of the existing horrors of x86
  interrupt handling. The x86 interrupt management code was already an
  incomprehensible maze of indirections between the CPU vector management,
  interrupt remapping and the actual IO/APIC and PCI/MSI[-X]
  implementation.

  At roughly the same time ARM struggled with the ever growing SoC
  specific extensions which were glued on top of the architected GIC
  interrupt controller.

  This resulted in a fundamental redesign of interrupt management and
  provided today's prevailing concept of hierarchical interrupt domains.
  This made it possible to disentangle the interactions between the x86
  vector domain and interrupt remapping, and also allowed ARM to handle
  the zoo of SoC specific interrupt components in a sane way.

  The concept of hierarchical interrupt domains aims to encapsulate the
  functionality of particular IP blocks which are involved in interrupt
  delivery so that they become extensible and pluggable. The x86
  encapsulation looks like this:

                                        |--- device 1
  [Vector]---[Remapping]---[PCI/MSI]--|...
                                        |--- device N

  where the remapping domain is an optional component; in case it is not
  available, the PCI/MSI[-X] domains have the vector domain as their
  parent. This reduced the required interaction between the domains pretty
  much to the initialization phase, where it is obviously required to
  establish the proper parent relationship between the components of the
  hierarchy.

  While in most cases the model strictly represents the chain of IP blocks
  and abstracts them so they can be plugged together to form a hierarchy,
  the design stopped short on PCI/MSI[-X]. Looking at the hardware it's
  clear that the actual PCI/MSI[-X] interrupt controller is not a global
  entity, but strictly a per-PCI-device entity.
  Here we took a shortcut on the hierarchical model and went for the easy
  solution of providing "global" PCI/MSI domains, which was possible
  because the PCI/MSI[-X] handling is uniform across the devices. This
  also allowed keeping the existing PCI/MSI[-X] infrastructure mostly
  unchanged, which in turn made it simple to keep the existing
  architecture specific management alive.

  A similar problem was created in the ARM world with support for IP block
  specific message storage. Instead of going all the way to stack an IP
  block specific domain on top of the generic MSI domain, this ended in a
  construct which provides a "global" platform MSI domain which allows
  overriding the irq_write_msi_msg() callback per allocation.

  In the course of the lengthy discussions we identified other abuse of
  the MSI infrastructure in wireless drivers, NTB etc., where support for
  implementation specific message storage was just mindlessly glued into
  the existing infrastructure. Some of this just works by chance on
  particular platforms, but will fail in hard to diagnose ways when the
  driver is used on platforms where the underlying MSI interrupt
  management code does not expect the creative abuse.

  Another shortcoming of today's PCI/MSI-X support is the inability to
  allocate or free individual vectors after the initial enablement of
  MSI-X. This results in a works-by-chance implementation of VFIO (PCI
  pass-through), where interrupts on the host side are not set up upfront
  to avoid resource exhaustion. They are expanded at run-time when the
  guest actually tries to use them. The way this is implemented is that
  the host disables MSI-X and then re-enables it with a larger number of
  vectors again. That works by chance because most device drivers set up
  all interrupts before the device will actually utilize them. But that's
  not universally true, because some drivers allocate a large enough
  number of vectors but do not utilize them until it's actually required,
  e.g. for acceleration support.
  But at that point other interrupts of the device might be in active use,
  and the MSI-X disable/enable dance can just result in losing interrupts
  and therefore hard to diagnose subtle problems.

  Last but not least the "global" PCI/MSI-X domain approach prevents
  utilizing PCI/MSI[-X] and PCI/IMS on the same device, due to the fact
  that IMS no longer provides a uniform storage and configuration model.

  The solution to this is to implement the missing step and switch from
  global PCI/MSI domains to per-device PCI/MSI domains. The resulting
  hierarchy then looks like this:

                             |--- [PCI/MSI] device 1
  [Vector]---[Remapping]---|...
                             |--- [PCI/MSI] device N

  which in turn makes it possible to provide support for multiple domains
  per device:

                             |--- [PCI/MSI] device 1
                             |--- [PCI/IMS] device 1
  [Vector]---[Remapping]---|...
                             |--- [PCI/MSI] device N
                             |--- [PCI/IMS] device N

  This work converts the MSI and PCI/MSI core and the x86 interrupt
  domains to the new model, provides new interfaces for post-enable
  allocation/free of MSI-X interrupts and the base framework for PCI/IMS.
  PCI/IMS has been verified with the work-in-progress IDXD driver.

  There is work in progress to convert ARM over, which will replace the
  platform MSI train-wreck. The cleanup of VFIO, NTB and other creative
  "solutions" is in the works as well.
- Drivers:

  - Updates for the LoongArch interrupt chip drivers
  - Support for MTK CIRQv2
  - The usual small fixes and updates all over the place

Merge tag 'irq-core-2022-12-10' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner.

* tag 'irq-core-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (134 commits)
  irqchip/ti-sci-inta: Fix kernel doc
  irqchip/gic-v2m: Mark a few functions __init
  irqchip/gic-v2m: Include arm-gic-common.h
  irqchip/irq-mvebu-icu: Fix works by chance pointer assignment
  iommu/amd: Enable PCI/IMS
  iommu/vt-d: Enable PCI/IMS
  x86/apic/msi: Enable PCI/IMS
  PCI/MSI: Provide pci_ims_alloc/free_irq()
  PCI/MSI: Provide IMS (Interrupt Message Store) support
  genirq/msi: Provide constants for PCI/IMS support
  x86/apic/msi: Enable MSI_FLAG_PCI_MSIX_ALLOC_DYN
  PCI/MSI: Provide post-enable dynamic allocation interfaces for MSI-X
  PCI/MSI: Provide prepare_desc() MSI domain op
  PCI/MSI: Split MSI-X descriptor setup
  genirq/msi: Provide MSI_FLAG_MSIX_ALLOC_DYN
  genirq/msi: Provide msi_domain_alloc_irq_at()
  genirq/msi: Provide msi_domain_ops::prepare_desc()
  genirq/msi: Provide msi_desc::msi_data
  genirq/msi: Provide struct msi_map
  x86/apic/msi: Remove arch_create_remap_msi_irq_domain()
  ...
This commit is contained in:
commit 9d33edb20f
@@ -285,3 +285,13 @@ to bridges between the PCI root and the device, MSIs are disabled.
 It is also worth checking the device driver to see whether it supports MSIs.
 For example, it may contain calls to pci_alloc_irq_vectors() with the
 PCI_IRQ_MSI or PCI_IRQ_MSIX flags.
+
+
+List of device drivers MSI(-X) APIs
+===================================
+
+The PCI/MSI subsystem has a dedicated C file for its exported device driver
+APIs — `drivers/pci/msi/api.c`. The following functions are exported:
+
+.. kernel-doc:: drivers/pci/msi/api.c
+   :export:
@@ -0,0 +1,34 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/loongarch,cpu-interrupt-controller.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: LoongArch CPU Interrupt Controller
+
+maintainers:
+  - Liu Peibao <liupeibao@loongson.cn>
+
+properties:
+  compatible:
+    const: loongarch,cpu-interrupt-controller
+
+  '#interrupt-cells':
+    const: 1
+
+  interrupt-controller: true
+
+additionalProperties: false
+
+required:
+  - compatible
+  - '#interrupt-cells'
+  - interrupt-controller
+
+examples:
+  - |
+    interrupt-controller {
+      compatible = "loongarch,cpu-interrupt-controller";
+      #interrupt-cells = <1>;
+      interrupt-controller;
+    };
@ -1,33 +0,0 @@
|
||||
* Mediatek 27xx cirq
|
||||
|
||||
In Mediatek SOCs, the CIRQ is a low power interrupt controller designed to
|
||||
work outside MCUSYS which comprises with Cortex-Ax cores,CCI and GIC.
|
||||
The external interrupts (outside MCUSYS) will feed through CIRQ and connect
|
||||
to GIC in MCUSYS. When CIRQ is enabled, it will record the edge-sensitive
|
||||
interrupts and generate a pulse signal to parent interrupt controller when
|
||||
flush command is executed. With CIRQ, MCUSYS can be completely turned off
|
||||
to improve the system power consumption without losing interrupts.
|
||||
|
||||
Required properties:
|
||||
- compatible: should be one of
|
||||
- "mediatek,mt2701-cirq" for mt2701 CIRQ
|
||||
- "mediatek,mt8135-cirq" for mt8135 CIRQ
|
||||
- "mediatek,mt8173-cirq" for mt8173 CIRQ
|
||||
and "mediatek,cirq" as a fallback.
|
||||
- interrupt-controller : Identifies the node as an interrupt controller.
|
||||
- #interrupt-cells : Use the same format as specified by GIC in arm,gic.txt.
|
||||
- reg: Physical base address of the cirq registers and length of memory
|
||||
mapped region.
|
||||
- mediatek,ext-irq-range: Identifies external irq number range in different
|
||||
SOCs.
|
||||
|
||||
Example:
|
||||
cirq: interrupt-controller@10204000 {
|
||||
compatible = "mediatek,mt2701-cirq",
|
||||
"mediatek,mtk-cirq";
|
||||
interrupt-controller;
|
||||
#interrupt-cells = <3>;
|
||||
interrupt-parent = <&sysirq>;
|
||||
reg = <0 0x10204000 0 0x400>;
|
||||
mediatek,ext-irq-start = <32 200>;
|
||||
};
|
@@ -0,0 +1,68 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/mediatek,mtk-cirq.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek System Interrupt Controller
+
+maintainers:
+  - Youlin Pei <youlin.pei@mediatek.com>
+
+description:
+  In MediaTek SoCs, the CIRQ is a low power interrupt controller designed to
+  work outside of MCUSYS which comprises with Cortex-Ax cores, CCI and GIC.
+  The external interrupts (outside MCUSYS) will feed through CIRQ and connect
+  to GIC in MCUSYS. When CIRQ is enabled, it will record the edge-sensitive
+  interrupts and generate a pulse signal to parent interrupt controller when
+  flush command is executed. With CIRQ, MCUSYS can be completely turned off
+  to improve the system power consumption without losing interrupts.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - mediatek,mt2701-cirq
+          - mediatek,mt8135-cirq
+          - mediatek,mt8173-cirq
+          - mediatek,mt8192-cirq
+      - const: mediatek,mtk-cirq
+
+  reg:
+    maxItems: 1
+
+  '#interrupt-cells':
+    const: 3
+
+  interrupt-controller: true
+
+  mediatek,ext-irq-range:
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    items:
+      - description: First CIRQ interrupt
+      - description: Last CIRQ interrupt
+    description:
+      Identifies the range of external interrupts in different SoCs
+
+required:
+  - compatible
+  - reg
+  - '#interrupt-cells'
+  - interrupt-controller
+  - mediatek,ext-irq-range
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    cirq: interrupt-controller@10204000 {
+      compatible = "mediatek,mt2701-cirq", "mediatek,mtk-cirq";
+      reg = <0x10204000 0x400>;
+      #interrupt-cells = <3>;
+      interrupt-controller;
+      interrupt-parent = <&sysirq>;
+      mediatek,ext-irq-range = <32 200>;
+    };
@@ -93,7 +93,7 @@ int liointc_acpi_init(struct irq_domain *parent,
 int eiointc_acpi_init(struct irq_domain *parent,
 		      struct acpi_madt_eio_pic *acpi_eiointc);
 
-struct irq_domain *htvec_acpi_init(struct irq_domain *parent,
-				   struct acpi_madt_ht_pic *acpi_htvec);
+int htvec_acpi_init(struct irq_domain *parent,
+		    struct acpi_madt_ht_pic *acpi_htvec);
 int pch_lpc_acpi_init(struct irq_domain *parent,
 		      struct acpi_madt_lpc_pic *acpi_pchlpc);
@@ -447,21 +447,18 @@ static void pseries_msi_ops_msi_free(struct irq_domain *domain,
  * RTAS can not disable one MSI at a time. It's all or nothing. Do it
  * at the end after all IRQs have been freed.
  */
-static void pseries_msi_domain_free_irqs(struct irq_domain *domain,
-					 struct device *dev)
+static void pseries_msi_post_free(struct irq_domain *domain, struct device *dev)
 {
 	if (WARN_ON_ONCE(!dev_is_pci(dev)))
 		return;
 
-	__msi_domain_free_irqs(domain, dev);
-
 	rtas_disable_msi(to_pci_dev(dev));
 }
 
 static struct msi_domain_ops pseries_pci_msi_domain_ops = {
 	.msi_prepare	= pseries_msi_ops_prepare,
 	.msi_free	= pseries_msi_ops_msi_free,
-	.domain_free_irqs = pseries_msi_domain_free_irqs,
+	.msi_post_free	= pseries_msi_post_free,
 };
 
 static void pseries_msi_shutdown(struct irq_data *d)
@@ -381,7 +381,6 @@ config UML_PCI_OVER_VIRTIO
 	select UML_IOMEM_EMULATION
 	select UML_DMA_EMULATION
 	select PCI_MSI
-	select PCI_MSI_IRQ_DOMAIN
 	select PCI_LOCKLESS_CONFIG
 
 config UML_PCI_OVER_VIRTIO_DEVICE_ID
@@ -7,7 +7,7 @@
 /* Generic PCI */
 #include <asm-generic/pci.h>
 
-#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
+#ifdef CONFIG_PCI_MSI
 /*
  * This is a bit of an annoying hack, and it assumes we only have
  * the virt-pci (if anything). Which is true, but still.
@@ -1110,7 +1110,6 @@ config X86_LOCAL_APIC
 	def_bool y
 	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI
 	select IRQ_DOMAIN_HIERARCHY
-	select PCI_MSI_IRQ_DOMAIN if PCI_MSI
 
 config X86_IO_APIC
 	def_bool y
9
arch/x86/include/asm/hyperv_timer.h
Normal file
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_HYPERV_TIMER_H
+#define _ASM_X86_HYPERV_TIMER_H
+
+#include <asm/msr.h>
+
+#define hv_get_raw_timer() rdtsc_ordered()
+
+#endif
@@ -44,10 +44,6 @@ extern int irq_remapping_reenable(int);
 extern int irq_remap_enable_fault_handling(void);
 extern void panic_if_irq_remap(const char *msg);
 
-/* Create PCI MSI/MSIx irqdomain, use @parent as the parent irqdomain. */
-extern struct irq_domain *
-arch_create_remap_msi_irq_domain(struct irq_domain *par, const char *n, int id);
-
 /* Get parent irqdomain for interrupt remapping irqdomain */
 static inline struct irq_domain *arch_get_ir_parent_domain(void)
 {
@@ -7,9 +7,7 @@
 
 #ifdef CONFIG_X86_LOCAL_APIC
 enum {
-	/* Allocate contiguous CPU vectors */
-	X86_IRQ_ALLOC_CONTIGUOUS_VECTORS	= 0x1,
-	X86_IRQ_ALLOC_LEGACY			= 0x2,
+	X86_IRQ_ALLOC_LEGACY			= 0x1,
 };
 
 extern int x86_fwspec_is_ioapic(struct irq_fwspec *fwspec);
@@ -19,8 +19,6 @@ typedef int (*hyperv_fill_flush_list_func)(
 		struct hv_guest_mapping_flush_list *flush,
 		void *data);
 
-#define hv_get_raw_timer() rdtsc_ordered()
-
 void hyperv_vector_handler(struct pt_regs *regs);
 
 #if IS_ENABLED(CONFIG_HYPERV)
@@ -62,4 +62,10 @@ typedef struct x86_msi_addr_hi {
 struct msi_msg;
 u32 x86_msi_msg_get_destid(struct msi_msg *msg, bool extid);
 
+#define X86_VECTOR_MSI_FLAGS_SUPPORTED					\
+	(MSI_GENERIC_FLAGS_MASK | MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN)
+
+#define X86_VECTOR_MSI_FLAGS_REQUIRED					\
+	(MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS)
+
 #endif /* _ASM_X86_MSI_H */
@@ -21,7 +21,7 @@ struct pci_sysdata {
 #ifdef CONFIG_X86_64
 	void		*iommu;		/* IOMMU private data */
 #endif
-#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
+#ifdef CONFIG_PCI_MSI
 	void		*fwnode;	/* IRQ domain for MSI assignment */
 #endif
 #if IS_ENABLED(CONFIG_VMD)
@@ -52,7 +52,7 @@ static inline int pci_proc_domain(struct pci_bus *bus)
 }
 #endif
 
-#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
+#ifdef CONFIG_PCI_MSI
 static inline void *_pci_root_bus_fwnode(struct pci_bus *bus)
 {
 	return to_pci_sysdata(bus)->fwnode;
@@ -92,6 +92,7 @@ void pcibios_scan_root(int bus);
 struct irq_routing_table *pcibios_get_irq_routing_table(void);
 int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
 
+bool pci_dev_has_default_msi_parent_domain(struct pci_dev *dev);
 
 #define HAVE_PCI_MMAP
 #define arch_can_pci_mmap_wc()	pat_enabled()
@@ -142,70 +142,139 @@ msi_set_affinity(struct irq_data *irqd, const struct cpumask *mask, bool force)
 	return ret;
 }
 
-/*
- * IRQ Chip for MSI PCI/PCI-X/PCI-Express Devices,
- * which implement the MSI or MSI-X Capability Structure.
+/**
+ * pci_dev_has_default_msi_parent_domain - Check whether the device has the default
+ *					   MSI parent domain associated
+ * @dev:	Pointer to the PCI device
  */
-static struct irq_chip pci_msi_controller = {
-	.name			= "PCI-MSI",
-	.irq_unmask		= pci_msi_unmask_irq,
-	.irq_mask		= pci_msi_mask_irq,
-	.irq_ack		= irq_chip_ack_parent,
-	.irq_retrigger		= irq_chip_retrigger_hierarchy,
-	.irq_set_affinity	= msi_set_affinity,
-	.flags			= IRQCHIP_SKIP_SET_WAKE |
-				  IRQCHIP_AFFINITY_PRE_STARTUP,
-};
-
-int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
-		    msi_alloc_info_t *arg)
+bool pci_dev_has_default_msi_parent_domain(struct pci_dev *dev)
 {
-	init_irq_alloc_info(arg, NULL);
-	if (to_pci_dev(dev)->msix_enabled) {
-		arg->type = X86_IRQ_ALLOC_TYPE_PCI_MSIX;
-	} else {
-		arg->type = X86_IRQ_ALLOC_TYPE_PCI_MSI;
-		arg->flags |= X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;
-	}
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pci_msi_prepare);
+	struct irq_domain *domain = dev_get_msi_domain(&dev->dev);
 
-static struct msi_domain_ops pci_msi_domain_ops = {
-	.msi_prepare    = pci_msi_prepare,
-};
+	if (!domain)
+		domain = dev_get_msi_domain(&dev->bus->dev);
+	if (!domain)
+		return false;
+
+	return domain == x86_vector_domain;
+}
+
+/**
+ * x86_msi_prepare - Setup of msi_alloc_info_t for allocations
+ * @domain:	The domain for which this setup happens
+ * @dev:	The device for which interrupts are allocated
+ * @nvec:	The number of vectors to allocate
+ * @alloc:	The allocation info structure to initialize
+ *
+ * This function is to be used for all types of MSI domains above the x86
+ * vector domain and any intermediates. It is always invoked from the
+ * top level interrupt domain. The domain specific allocation
+ * functionality is determined via the @domain's bus token which allows to
+ * map the X86 specific allocation type.
+ */
+static int x86_msi_prepare(struct irq_domain *domain, struct device *dev,
+			   int nvec, msi_alloc_info_t *alloc)
+{
+	struct msi_domain_info *info = domain->host_data;
+
+	init_irq_alloc_info(alloc, NULL);
+
+	switch (info->bus_token) {
+	case DOMAIN_BUS_PCI_DEVICE_MSI:
+		alloc->type = X86_IRQ_ALLOC_TYPE_PCI_MSI;
+		return 0;
+	case DOMAIN_BUS_PCI_DEVICE_MSIX:
+	case DOMAIN_BUS_PCI_DEVICE_IMS:
+		alloc->type = X86_IRQ_ALLOC_TYPE_PCI_MSIX;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
 
-static struct msi_domain_info pci_msi_domain_info = {
-	.flags		= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-			  MSI_FLAG_PCI_MSIX,
-	.ops		= &pci_msi_domain_ops,
-	.chip		= &pci_msi_controller,
-	.handler	= handle_edge_irq,
-	.handler_name	= "edge",
-};
+/**
+ * x86_init_dev_msi_info - Domain info setup for MSI domains
+ * @dev:		The device for which the domain should be created
+ * @domain:		The (root) domain providing this callback
+ * @real_parent:	The real parent domain of the to initialize domain
+ * @info:		The domain info for the to initialize domain
+ *
+ * This function is to be used for all types of MSI domains above the x86
+ * vector domain and any intermediates. The domain specific functionality
+ * is determined via the @real_parent.
+ */
+static bool x86_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
+				  struct irq_domain *real_parent, struct msi_domain_info *info)
+{
+	const struct msi_parent_ops *pops = real_parent->msi_parent_ops;
+
+	/* MSI parent domain specific settings */
+	switch (real_parent->bus_token) {
+	case DOMAIN_BUS_ANY:
+		/* Only the vector domain can have the ANY token */
+		if (WARN_ON_ONCE(domain != real_parent))
+			return false;
+		info->chip->irq_set_affinity = msi_set_affinity;
+		/* See msi_set_affinity() for the gory details */
+		info->flags |= MSI_FLAG_NOMASK_QUIRK;
+		break;
+	case DOMAIN_BUS_DMAR:
+	case DOMAIN_BUS_AMDVI:
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return false;
+	}
+
+	/* Is the target supported? */
+	switch(info->bus_token) {
+	case DOMAIN_BUS_PCI_DEVICE_MSI:
+	case DOMAIN_BUS_PCI_DEVICE_MSIX:
+		break;
+	case DOMAIN_BUS_PCI_DEVICE_IMS:
+		if (!(pops->supported_flags & MSI_FLAG_PCI_IMS))
+			return false;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return false;
+	}
+
+	/*
+	 * Mask out the domain specific MSI feature flags which are not
+	 * supported by the real parent.
+	 */
+	info->flags			&= pops->supported_flags;
+	/* Enforce the required flags */
+	info->flags			|= X86_VECTOR_MSI_FLAGS_REQUIRED;
+
+	/* This is always invoked from the top level MSI domain! */
+	info->ops->msi_prepare		= x86_msi_prepare;
+
+	info->chip->irq_ack		= irq_chip_ack_parent;
+	info->chip->irq_retrigger	= irq_chip_retrigger_hierarchy;
+	info->chip->flags		|= IRQCHIP_SKIP_SET_WAKE |
+					   IRQCHIP_AFFINITY_PRE_STARTUP;
+
+	info->handler			= handle_edge_irq;
+	info->handler_name		= "edge";
+
+	return true;
+}
+
+static const struct msi_parent_ops x86_vector_msi_parent_ops = {
+	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED,
+	.init_dev_msi_info	= x86_init_dev_msi_info,
+};
 
 struct irq_domain * __init native_create_pci_msi_domain(void)
 {
-	struct fwnode_handle *fn;
-	struct irq_domain *d;
-
 	if (disable_apic)
 		return NULL;
 
-	fn = irq_domain_alloc_named_fwnode("PCI-MSI");
-	if (!fn)
-		return NULL;
-
-	d = pci_msi_create_irq_domain(fn, &pci_msi_domain_info,
-				      x86_vector_domain);
-	if (!d) {
-		irq_domain_free_fwnode(fn);
-		pr_warn("Failed to initialize PCI-MSI irqdomain.\n");
-	} else {
-		d->flags |= IRQ_DOMAIN_MSI_NOMASK_QUIRK;
-	}
-	return d;
+	x86_vector_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
+	x86_vector_domain->msi_parent_ops = &x86_vector_msi_parent_ops;
+	return x86_vector_domain;
 }
 
 void __init x86_create_pci_msi_domain(void)
@@ -213,41 +282,19 @@ void __init x86_create_pci_msi_domain(void)
|
||||
x86_pci_msi_default_domain = x86_init.irqs.create_pci_msi_domain();
|
||||
}

#ifdef CONFIG_IRQ_REMAP
static struct irq_chip pci_msi_ir_controller = {
	.name			= "IR-PCI-MSI",
	.irq_unmask		= pci_msi_unmask_irq,
	.irq_mask		= pci_msi_mask_irq,
	.irq_ack		= irq_chip_ack_parent,
	.irq_retrigger		= irq_chip_retrigger_hierarchy,
	.flags			= IRQCHIP_SKIP_SET_WAKE |
				  IRQCHIP_AFFINITY_PRE_STARTUP,
};

static struct msi_domain_info pci_msi_ir_domain_info = {
	.flags		= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
			  MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
	.ops		= &pci_msi_domain_ops,
	.chip		= &pci_msi_ir_controller,
	.handler	= handle_edge_irq,
	.handler_name	= "edge",
};

struct irq_domain *arch_create_remap_msi_irq_domain(struct irq_domain *parent,
						    const char *name, int id)
/* Keep around for hyperV */
int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
		    msi_alloc_info_t *arg)
{
	struct fwnode_handle *fn;
	struct irq_domain *d;
	init_irq_alloc_info(arg, NULL);

	fn = irq_domain_alloc_named_id_fwnode(name, id);
	if (!fn)
		return NULL;
	d = pci_msi_create_irq_domain(fn, &pci_msi_ir_domain_info, parent);
	if (!d)
		irq_domain_free_fwnode(fn);
	return d;
	if (to_pci_dev(dev)->msix_enabled)
		arg->type = X86_IRQ_ALLOC_TYPE_PCI_MSIX;
	else
		arg->type = X86_IRQ_ALLOC_TYPE_PCI_MSI;
	return 0;
}
#endif
EXPORT_SYMBOL_GPL(pci_msi_prepare);

#ifdef CONFIG_DMAR_TABLE
/*

@@ -539,10 +539,6 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq,
	if (disable_apic)
		return -ENXIO;

	/* Currently vector allocator can't guarantee contiguous allocations */
	if ((info->flags & X86_IRQ_ALLOC_CONTIGUOUS_VECTORS) && nr_irqs > 1)
		return -ENOSYS;

	/*
	 * Catch any attempt to touch the cascade interrupt on a PIC
	 * equipped system.
@@ -387,13 +387,15 @@ int acpi_pci_irq_enable(struct pci_dev *dev)
	u8 pin;
	int triggering = ACPI_LEVEL_SENSITIVE;
	/*
	 * On ARM systems with the GIC interrupt model, level interrupts
	 * On ARM systems with the GIC interrupt model, or LoongArch
	 * systems with the LPIC interrupt model, level interrupts
	 * are always polarity high by specification; PCI legacy
	 * IRQs lines are inverted before reaching the interrupt
	 * controller and must therefore be considered active high
	 * as default.
	 */
	int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ?
	int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ||
		       acpi_irq_model == ACPI_IRQ_MODEL_LPIC ?
			ACPI_ACTIVE_HIGH : ACPI_ACTIVE_LOW;
	char *link = NULL;
	char link_desc[16];

@@ -22,7 +22,7 @@ obj-$(CONFIG_REGMAP) += regmap/
obj-$(CONFIG_SOC_BUS) += soc.o
obj-$(CONFIG_PINCTRL) += pinctrl.o
obj-$(CONFIG_DEV_COREDUMP) += devcoredump.o
obj-$(CONFIG_GENERIC_MSI_IRQ_DOMAIN) += platform-msi.o
obj-$(CONFIG_GENERIC_MSI_IRQ) += platform-msi.o
obj-$(CONFIG_GENERIC_ARCH_TOPOLOGY) += arch_topology.o
obj-$(CONFIG_GENERIC_ARCH_NUMA) += arch_numa.o
obj-$(CONFIG_ACPI) += physical_location.o

@@ -213,7 +213,7 @@ int platform_msi_domain_alloc_irqs(struct device *dev, unsigned int nvec,
	if (err)
		return err;

	err = msi_domain_alloc_irqs(dev->msi.domain, dev, nvec);
	err = msi_domain_alloc_irqs_range(dev, MSI_DEFAULT_DOMAIN, 0, nvec - 1);
	if (err)
		platform_msi_free_priv_data(dev);

@@ -227,7 +227,7 @@ EXPORT_SYMBOL_GPL(platform_msi_domain_alloc_irqs);
 */
void platform_msi_domain_free_irqs(struct device *dev)
{
	msi_domain_free_irqs(dev->msi.domain, dev);
	msi_domain_free_irqs_all(dev, MSI_DEFAULT_DOMAIN);
	platform_msi_free_priv_data(dev);
}
EXPORT_SYMBOL_GPL(platform_msi_domain_free_irqs);
@@ -325,7 +325,7 @@ void platform_msi_device_domain_free(struct irq_domain *domain, unsigned int vir

	msi_lock_descs(data->dev);
	irq_domain_free_irqs_common(domain, virq, nr_irqs);
	msi_free_msi_descs_range(data->dev, MSI_DESC_ALL, virq, virq + nr_irqs - 1);
	msi_free_msi_descs_range(data->dev, virq, virq + nr_irqs - 1);
	msi_unlock_descs(data->dev);
}

@@ -8,7 +8,7 @@
config FSL_MC_BUS
	bool "QorIQ DPAA2 fsl-mc bus driver"
	depends on OF && (ARCH_LAYERSCAPE || (COMPILE_TEST && (ARM || ARM64 || X86_LOCAL_APIC || PPC)))
	select GENERIC_MSI_IRQ_DOMAIN
	select GENERIC_MSI_IRQ
	help
	  Driver to enable the bus infrastructure for the QorIQ DPAA2
	  architecture. The fsl-mc bus driver handles discovery of
@@ -11,7 +11,6 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/msi.h>
#include <linux/fsl/mc.h>

#include "fsl-mc-private.h"
@@ -17,7 +17,6 @@
#include <linux/slab.h>
#include <linux/limits.h>
#include <linux/bitops.h>
#include <linux/msi.h>
#include <linux/dma-mapping.h>
#include <linux/acpi.h>
#include <linux/iommu.h>
@@ -213,21 +213,8 @@ struct irq_domain *fsl_mc_find_msi_domain(struct device *dev)

int fsl_mc_msi_domain_alloc_irqs(struct device *dev, unsigned int irq_count)
{
	struct irq_domain *msi_domain;
	int error;
	int error = msi_setup_device_data(dev);

	msi_domain = dev_get_msi_domain(dev);
	if (!msi_domain)
		return -EINVAL;

	error = msi_setup_device_data(dev);
	if (error)
		return error;

	msi_lock_descs(dev);
	if (msi_first_desc(dev, MSI_DESC_ALL))
		error = -EINVAL;
	msi_unlock_descs(dev);
	if (error)
		return error;

@@ -235,7 +222,7 @@ int fsl_mc_msi_domain_alloc_irqs(struct device *dev, unsigned int irq_count)
	 * NOTE: Calling this function will trigger the invocation of the
	 * its_fsl_mc_msi_prepare() callback
	 */
	error = msi_domain_alloc_irqs(msi_domain, dev, irq_count);
	error = msi_domain_alloc_irqs_range(dev, MSI_DEFAULT_DOMAIN, 0, irq_count - 1);

	if (error)
		dev_err(dev, "Failed to allocate IRQs\n");
@@ -244,11 +231,5 @@ int fsl_mc_msi_domain_alloc_irqs(struct device *dev, unsigned int irq_count)

void fsl_mc_msi_domain_free_irqs(struct device *dev)
{
	struct irq_domain *msi_domain;

	msi_domain = dev_get_msi_domain(dev);
	if (!msi_domain)
		return;

	msi_domain_free_irqs(msi_domain, dev);
	msi_domain_free_irqs_all(dev, MSI_DEFAULT_DOMAIN);
}
@@ -462,7 +462,7 @@ config MV_XOR_V2
	select DMA_ENGINE
	select DMA_ENGINE_RAID
	select ASYNC_TX_ENABLE_CHANNEL_SWITCH
	select GENERIC_MSI_IRQ_DOMAIN
	select GENERIC_MSI_IRQ
	help
	  Enable support for the Marvell version 2 XOR engine.

@@ -610,7 +610,7 @@ static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
	return hidma_ll_inthandler(chirq, lldev);
}

#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
static irqreturn_t hidma_chirq_handler_msi(int chirq, void *arg)
{
	struct hidma_lldev **lldevp = arg;
@@ -671,7 +671,7 @@ static int hidma_sysfs_init(struct hidma_dev *dev)
	return device_create_file(dev->ddev.dev, dev->chid_attrs);
}

#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
static void hidma_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
	struct device *dev = msi_desc_to_dev(desc);
@@ -687,7 +687,7 @@ static void hidma_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)

static void hidma_free_msis(struct hidma_dev *dmadev)
{
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
	struct device *dev = dmadev->ddev.dev;
	int i, virq;

@@ -704,7 +704,7 @@ static void hidma_free_msis(struct hidma_dev *dmadev)
static int hidma_request_msi(struct hidma_dev *dmadev,
			     struct platform_device *pdev)
{
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
	int rc, i, virq;

	rc = platform_msi_domain_alloc_irqs(&pdev->dev, HIDMA_MSI_INTS,
@@ -36,6 +36,7 @@
#include <linux/dma-map-ops.h>
#include <linux/pci.h>
#include <clocksource/hyperv_timer.h>
#include <asm/mshyperv.h>
#include "hyperv_vmbus.h"

struct vmbus_dynid {
@@ -389,7 +389,7 @@ config ARM_SMMU_V3
	depends on ARM64
	select IOMMU_API
	select IOMMU_IO_PGTABLE_LPAE
	select GENERIC_MSI_IRQ_DOMAIN
	select GENERIC_MSI_IRQ
	help
	  Support for implementations of the ARM System MMU architecture
	  version 3 providing translation support to a PCIe root complex.
@@ -734,7 +734,6 @@ struct amd_iommu {
	u8 max_counters;
#ifdef CONFIG_IRQ_REMAP
	struct irq_domain *ir_domain;
	struct irq_domain *msi_domain;

	struct amd_irte_ops *irte_ops;
#endif
@@ -812,10 +812,10 @@ static void
amd_iommu_set_pci_msi_domain(struct device *dev, struct amd_iommu *iommu)
{
	if (!irq_remapping_enabled || !dev_is_pci(dev) ||
	    pci_dev_has_special_msi_domain(to_pci_dev(dev)))
	    !pci_dev_has_default_msi_parent_domain(to_pci_dev(dev)))
		return;

	dev_set_msi_domain(dev, iommu->msi_domain);
	dev_set_msi_domain(dev, iommu->ir_domain);
}

#else /* CONFIG_IRQ_REMAP */
@@ -3294,17 +3294,9 @@ static int irq_remapping_alloc(struct irq_domain *domain, unsigned int virq,

	if (!info)
		return -EINVAL;
	if (nr_irqs > 1 && info->type != X86_IRQ_ALLOC_TYPE_PCI_MSI &&
	    info->type != X86_IRQ_ALLOC_TYPE_PCI_MSIX)
	if (nr_irqs > 1 && info->type != X86_IRQ_ALLOC_TYPE_PCI_MSI)
		return -EINVAL;

	/*
	 * With IRQ remapping enabled, don't need contiguous CPU vectors
	 * to support multiple MSI interrupts.
	 */
	if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI)
		info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;

	sbdf = get_devid(info);
	if (sbdf < 0)
		return -EINVAL;
@@ -3656,6 +3648,21 @@ static struct irq_chip amd_ir_chip = {
	.irq_compose_msi_msg = ir_compose_msi_msg,
};

static const struct msi_parent_ops amdvi_msi_parent_ops = {
	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
				  MSI_FLAG_MULTI_PCI_MSI |
				  MSI_FLAG_PCI_IMS,
	.prefix			= "IR-",
	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
};

static const struct msi_parent_ops virt_amdvi_msi_parent_ops = {
	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
				  MSI_FLAG_MULTI_PCI_MSI,
	.prefix			= "vIR-",
	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
};

int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
{
	struct fwnode_handle *fn;
@@ -3663,16 +3670,21 @@ int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
	fn = irq_domain_alloc_named_id_fwnode("AMD-IR", iommu->index);
	if (!fn)
		return -ENOMEM;
	iommu->ir_domain = irq_domain_create_tree(fn, &amd_ir_domain_ops, iommu);
	iommu->ir_domain = irq_domain_create_hierarchy(arch_get_ir_parent_domain(), 0, 0,
						       fn, &amd_ir_domain_ops, iommu);
	if (!iommu->ir_domain) {
		irq_domain_free_fwnode(fn);
		return -ENOMEM;
	}

	iommu->ir_domain->parent = arch_get_ir_parent_domain();
	iommu->msi_domain = arch_create_remap_msi_irq_domain(iommu->ir_domain,
							     "AMD-IR-MSI",
							     iommu->index);
	irq_domain_update_bus_token(iommu->ir_domain, DOMAIN_BUS_AMDVI);
	iommu->ir_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;

	if (amd_iommu_np_cache)
		iommu->ir_domain->msi_parent_ops = &virt_amdvi_msi_parent_ops;
	else
		iommu->ir_domain->msi_parent_ops = &amdvi_msi_parent_ops;

	return 0;
}
@@ -600,7 +600,6 @@ struct intel_iommu {
#ifdef CONFIG_IRQ_REMAP
	struct ir_table *ir_table;	/* Interrupt remapping info */
	struct irq_domain *ir_domain;
	struct irq_domain *ir_msi_domain;
#endif
	struct iommu_device iommu;	/* IOMMU core code handle */
	int		node;
@@ -82,6 +82,7 @@ static const struct irq_domain_ops intel_ir_domain_ops;

static void iommu_disable_irq_remapping(struct intel_iommu *iommu);
static int __init parse_ioapics_under_ir(void);
static const struct msi_parent_ops dmar_msi_parent_ops, virt_dmar_msi_parent_ops;

static bool ir_pre_enabled(struct intel_iommu *iommu)
{
@@ -230,7 +231,7 @@ static struct irq_domain *map_dev_to_ir(struct pci_dev *dev)
{
	struct dmar_drhd_unit *drhd = dmar_find_matched_drhd_unit(dev);

	return drhd ? drhd->iommu->ir_msi_domain : NULL;
	return drhd ? drhd->iommu->ir_domain : NULL;
}

static int clear_entries(struct irq_2_iommu *irq_iommu)
@@ -573,10 +574,14 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
		pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);
		goto out_free_fwnode;
	}
	iommu->ir_msi_domain =
		arch_create_remap_msi_irq_domain(iommu->ir_domain,
						 "INTEL-IR-MSI",
						 iommu->seq_id);

	irq_domain_update_bus_token(iommu->ir_domain, DOMAIN_BUS_DMAR);
	iommu->ir_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;

	if (cap_caching_mode(iommu->cap))
		iommu->ir_domain->msi_parent_ops = &virt_dmar_msi_parent_ops;
	else
		iommu->ir_domain->msi_parent_ops = &dmar_msi_parent_ops;

	ir_table->base = page_address(pages);
	ir_table->bitmap = bitmap;
@@ -620,9 +625,6 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
	return 0;

out_free_ir_domain:
	if (iommu->ir_msi_domain)
		irq_domain_remove(iommu->ir_msi_domain);
	iommu->ir_msi_domain = NULL;
	irq_domain_remove(iommu->ir_domain);
	iommu->ir_domain = NULL;
out_free_fwnode:
@@ -644,13 +646,6 @@ static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
	struct fwnode_handle *fn;

	if (iommu && iommu->ir_table) {
		if (iommu->ir_msi_domain) {
			fn = iommu->ir_msi_domain->fwnode;

			irq_domain_remove(iommu->ir_msi_domain);
			irq_domain_free_fwnode(fn);
			iommu->ir_msi_domain = NULL;
		}
		if (iommu->ir_domain) {
			fn = iommu->ir_domain->fwnode;

@@ -1107,7 +1102,7 @@ error:
 */
void intel_irq_remap_add_device(struct dmar_pci_notify_info *info)
{
	if (!irq_remapping_enabled || pci_dev_has_special_msi_domain(info->dev))
	if (!irq_remapping_enabled || !pci_dev_has_default_msi_parent_domain(info->dev))
		return;

	dev_set_msi_domain(&info->dev->dev, map_dev_to_ir(info->dev));
@@ -1334,17 +1329,9 @@ static int intel_irq_remapping_alloc(struct irq_domain *domain,

	if (!info || !iommu)
		return -EINVAL;
	if (nr_irqs > 1 && info->type != X86_IRQ_ALLOC_TYPE_PCI_MSI &&
	    info->type != X86_IRQ_ALLOC_TYPE_PCI_MSIX)
	if (nr_irqs > 1 && info->type != X86_IRQ_ALLOC_TYPE_PCI_MSI)
		return -EINVAL;

	/*
	 * With IRQ remapping enabled, don't need contiguous CPU vectors
	 * to support multiple MSI interrupts.
	 */
	if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI)
		info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;

	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
	if (ret < 0)
		return ret;
@@ -1445,6 +1432,21 @@ static const struct irq_domain_ops intel_ir_domain_ops = {
	.deactivate = intel_irq_remapping_deactivate,
};

static const struct msi_parent_ops dmar_msi_parent_ops = {
	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
				  MSI_FLAG_MULTI_PCI_MSI |
				  MSI_FLAG_PCI_IMS,
	.prefix			= "IR-",
	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
};

static const struct msi_parent_ops virt_dmar_msi_parent_ops = {
	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
				  MSI_FLAG_MULTI_PCI_MSI,
	.prefix			= "vIR-",
	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
};

/*
 * Support of Interrupt Remapping Unit Hotplug
 */
@ -9,7 +9,6 @@
|
||||
#include <linux/iommu.h>
|
||||
#include <linux/limits.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/msi.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_iommu.h>
|
||||
#include <linux/of_pci.h>
|
||||
|
@ -38,7 +38,7 @@ config ARM_GIC_V3
|
||||
|
||||
config ARM_GIC_V3_ITS
|
||||
bool
|
||||
select GENERIC_MSI_IRQ_DOMAIN
|
||||
select GENERIC_MSI_IRQ
|
||||
default ARM_GIC_V3
|
||||
|
||||
config ARM_GIC_V3_ITS_PCI
|
||||
@ -86,7 +86,7 @@ config ALPINE_MSI
|
||||
|
||||
config AL_FIC
|
||||
bool "Amazon's Annapurna Labs Fabric Interrupt Controller"
|
||||
depends on OF || COMPILE_TEST
|
||||
depends on OF
|
||||
select GENERIC_IRQ_CHIP
|
||||
select IRQ_DOMAIN
|
||||
help
|
||||
@ -375,7 +375,7 @@ config MVEBU_ICU
|
||||
|
||||
config MVEBU_ODMI
|
||||
bool
|
||||
select GENERIC_MSI_IRQ_DOMAIN
|
||||
select GENERIC_MSI_IRQ
|
||||
|
||||
config MVEBU_PIC
|
||||
bool
|
||||
@ -488,7 +488,7 @@ config IMX_MU_MSI
|
||||
default m if ARCH_MXC
|
||||
select IRQ_DOMAIN
|
||||
select IRQ_DOMAIN_HIERARCHY
|
||||
select GENERIC_MSI_IRQ_DOMAIN
|
||||
select GENERIC_MSI_IRQ
|
||||
help
|
||||
Provide a driver for the i.MX Messaging Unit block used as a
|
||||
CPU-to-CPU MSI controller. This requires a specially crafted DT
|
||||
@ -576,6 +576,7 @@ config IRQ_LOONGARCH_CPU
|
||||
select GENERIC_IRQ_CHIP
|
||||
select IRQ_DOMAIN
|
||||
select GENERIC_IRQ_EFFECTIVE_AFF_MASK
|
||||
select LOONGSON_HTVEC
|
||||
select LOONGSON_LIOINTC
|
||||
select LOONGSON_EIOINTC
|
||||
select LOONGSON_PCH_PIC
|
||||
|
@ -248,14 +248,14 @@ struct aic_info {
|
||||
bool fast_ipi;
|
||||
};
|
||||
|
||||
static const struct aic_info aic1_info = {
|
||||
static const struct aic_info aic1_info __initconst = {
|
||||
.version = 1,
|
||||
|
||||
.event = AIC_EVENT,
|
||||
.target_cpu = AIC_TARGET_CPU,
|
||||
};
|
||||
|
||||
static const struct aic_info aic1_fipi_info = {
|
||||
static const struct aic_info aic1_fipi_info __initconst = {
|
||||
.version = 1,
|
||||
|
||||
.event = AIC_EVENT,
|
||||
@ -264,7 +264,7 @@ static const struct aic_info aic1_fipi_info = {
|
||||
.fast_ipi = true,
|
||||
};
|
||||
|
||||
static const struct aic_info aic2_info = {
|
||||
static const struct aic_info aic2_info __initconst = {
|
||||
.version = 2,
|
||||
|
||||
.irq_cfg = AIC2_IRQ_CFG,
|
||||
|
@@ -102,7 +102,7 @@ static int gic_probe(struct platform_device *pdev)

	pm_runtime_enable(dev);

	ret = pm_runtime_get_sync(dev);
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		goto rpm_disable;

@@ -24,6 +24,7 @@
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/irqchip/arm-gic-common.h>

/*
 * MSI_TYPER:
@@ -262,7 +263,7 @@ static struct msi_domain_info gicv2m_pmsi_domain_info = {
	.chip	= &gicv2m_pmsi_irq_chip,
};

static void gicv2m_teardown(void)
static void __init gicv2m_teardown(void)
{
	struct v2m_data *v2m, *tmp;

@@ -277,7 +278,7 @@ static void gicv2m_teardown(void)
	}
}

static int gicv2m_allocate_domains(struct irq_domain *parent)
static __init int gicv2m_allocate_domains(struct irq_domain *parent)
{
	struct irq_domain *inner_domain, *pci_domain, *plat_domain;
	struct v2m_data *v2m;
@@ -404,7 +405,7 @@ err_free_v2m:
	return ret;
}

static const struct of_device_id gicv2m_device_id[] = {
static __initconst struct of_device_id gicv2m_device_id[] = {
	{	.compatible	= "arm,gic-v2m-frame",	},
	{},
};
@@ -454,7 +455,7 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
#ifdef CONFIG_ACPI
static int acpi_num_msi;

static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
static __init struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
{
	struct v2m_data *data;

@@ -469,7 +470,7 @@ static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
	return data->fwnode;
}

static bool acpi_check_amazon_graviton_quirks(void)
static __init bool acpi_check_amazon_graviton_quirks(void)
{
	static struct acpi_table_madt *madt;
	acpi_status status;
@@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/kstrtox.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@@ -1171,7 +1172,7 @@ static bool gicv3_nolpi;

static int __init gicv3_nolpi_cfg(char *buf)
{
	return strtobool(buf, &gicv3_nolpi);
	return kstrtobool(buf, &gicv3_nolpi);
}
early_param("irqchip.gicv3_nolpi", gicv3_nolpi_cfg);

@@ -19,6 +19,7 @@
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/kstrtox.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/list.h>
@@ -401,8 +402,8 @@ static void gic_irq_print_chip(struct irq_data *d, struct seq_file *p)
{
	struct gic_chip_data *gic = irq_data_get_irq_chip_data(d);

	if (gic->domain->dev)
		seq_printf(p, gic->domain->dev->of_node->name);
	if (gic->domain->pm_dev)
		seq_printf(p, gic->domain->pm_dev->of_node->name);
	else
		seq_printf(p, "GIC-%d", (int)(gic - &gic_data[0]));
}
@@ -1332,7 +1333,7 @@ static bool gicv2_force_probe;

static int __init gicv2_force_probe_cfg(char *buf)
{
	return strtobool(buf, &gicv2_force_probe);
	return kstrtobool(buf, &gicv2_force_probe);
}
early_param("irqchip.gicv2_force_probe", gicv2_force_probe_cfg);
@ -92,8 +92,25 @@ static const struct irq_domain_ops loongarch_cpu_intc_irq_domain_ops = {
|
||||
.xlate = irq_domain_xlate_onecell,
|
||||
};
|
||||
|
||||
static int __init
|
||||
liointc_parse_madt(union acpi_subtable_headers *header,
|
||||
#ifdef CONFIG_OF
|
||||
static int __init cpuintc_of_init(struct device_node *of_node,
|
||||
struct device_node *parent)
|
||||
{
|
||||
cpuintc_handle = of_node_to_fwnode(of_node);
|
||||
|
||||
irq_domain = irq_domain_create_linear(cpuintc_handle, EXCCODE_INT_NUM,
|
||||
&loongarch_cpu_intc_irq_domain_ops, NULL);
|
||||
if (!irq_domain)
|
||||
panic("Failed to add irqdomain for loongarch CPU");
|
||||
|
||||
set_handle_irq(&handle_cpu_irq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
IRQCHIP_DECLARE(cpu_intc, "loongson,cpu-interrupt-controller", cpuintc_of_init);
|
||||
#endif
|
||||
|
||||
static int __init liointc_parse_madt(union acpi_subtable_headers *header,
|
||||
const unsigned long end)
|
||||
{
|
||||
struct acpi_madt_lio_pic *liointc_entry = (struct acpi_madt_lio_pic *)header;
|
||||
@ -101,8 +118,7 @@ liointc_parse_madt(union acpi_subtable_headers *header,
|
||||
return liointc_acpi_init(irq_domain, liointc_entry);
|
||||
}
|
||||
|
||||
static int __init
|
||||
eiointc_parse_madt(union acpi_subtable_headers *header,
|
||||
static int __init eiointc_parse_madt(union acpi_subtable_headers *header,
|
||||
const unsigned long end)
|
||||
{
|
||||
struct acpi_madt_eio_pic *eiointc_entry = (struct acpi_madt_eio_pic *)header;
|
||||
@ -112,16 +128,24 @@ eiointc_parse_madt(union acpi_subtable_headers *header,
|
||||
|
||||
static int __init acpi_cascade_irqdomain_init(void)
|
||||
{
|
||||
acpi_table_parse_madt(ACPI_MADT_TYPE_LIO_PIC,
|
||||
liointc_parse_madt, 0);
|
||||
acpi_table_parse_madt(ACPI_MADT_TYPE_EIO_PIC,
|
||||
eiointc_parse_madt, 0);
|
||||
int r;
|
||||
|
||||
r = acpi_table_parse_madt(ACPI_MADT_TYPE_LIO_PIC, liointc_parse_madt, 0);
|
||||
if (r < 0)
|
||||
return r;
|
||||
|
||||
r = acpi_table_parse_madt(ACPI_MADT_TYPE_EIO_PIC, eiointc_parse_madt, 0);
|
||||
if (r < 0)
|
||||
return r;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int __init cpuintc_acpi_init(union acpi_subtable_headers *header,
|
||||
const unsigned long end)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (irq_domain)
|
||||
return 0;
|
||||
|
||||
@ -139,9 +163,9 @@ static int __init cpuintc_acpi_init(union acpi_subtable_headers *header,
|
||||
set_handle_irq(&handle_cpu_irq);
|
||||
acpi_set_irq_model(ACPI_IRQ_MODEL_LPIC, lpic_get_gsi_domain_id);
|
||||
acpi_set_gsi_to_irq_fallback(lpic_gsi_to_irq);
|
||||
acpi_cascade_irqdomain_init();
|
||||
ret = acpi_cascade_irqdomain_init();
|
||||
|
||||
return 0;
|
||||
return ret;
|
||||
}
|
||||
|
||||
IRQCHIP_ACPI_DECLARE(cpuintc_v1, ACPI_MADT_TYPE_CORE_PIC,
|
||||
|
@@ -17,6 +17,7 @@
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/syscore_ops.h>

#define EIOINTC_REG_NODEMAP	0x14a0
#define EIOINTC_REG_IPMAP	0x14c0
@@ -301,8 +302,38 @@ static struct irq_domain *acpi_get_vec_parent(int node, struct acpi_vector_group
	return NULL;
}

static int __init
pch_pic_parse_madt(union acpi_subtable_headers *header,
static int eiointc_suspend(void)
{
	return 0;
}

static void eiointc_resume(void)
{
	int i, j;
	struct irq_desc *desc;
	struct irq_data *irq_data;

	eiointc_router_init(0);

	for (i = 0; i < nr_pics; i++) {
		for (j = 0; j < VEC_COUNT; j++) {
			desc = irq_resolve_mapping(eiointc_priv[i]->eiointc_domain, j);
			if (desc && desc->handle_irq && desc->handle_irq != handle_bad_irq) {
				raw_spin_lock(&desc->lock);
				irq_data = &desc->irq_data;
				eiointc_set_irq_affinity(irq_data, irq_data->common->affinity, 0);
				raw_spin_unlock(&desc->lock);
			}
		}
	}
}

static struct syscore_ops eiointc_syscore_ops = {
	.suspend = eiointc_suspend,
	.resume = eiointc_resume,
};

static int __init pch_pic_parse_madt(union acpi_subtable_headers *header,
				     const unsigned long end)
{
	struct acpi_madt_bio_pic *pchpic_entry = (struct acpi_madt_bio_pic *)header;
@@ -315,8 +346,7 @@ pch_pic_parse_madt(union acpi_subtable_headers *header,
	return -EINVAL;
}

static int __init
pch_msi_parse_madt(union acpi_subtable_headers *header,
static int __init pch_msi_parse_madt(union acpi_subtable_headers *header,
				     const unsigned long end)
{
	struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header;
@@ -330,17 +360,23 @@ pch_msi_parse_madt(union acpi_subtable_headers *header,

static int __init acpi_cascade_irqdomain_init(void)
{
	acpi_table_parse_madt(ACPI_MADT_TYPE_BIO_PIC,
			      pch_pic_parse_madt, 0);
	acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC,
			      pch_msi_parse_madt, 1);
	int r;

	r = acpi_table_parse_madt(ACPI_MADT_TYPE_BIO_PIC, pch_pic_parse_madt, 0);
	if (r < 0)
		return r;

	r = acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC, pch_msi_parse_madt, 1);
	if (r < 0)
		return r;

	return 0;
}

int __init eiointc_acpi_init(struct irq_domain *parent,
			     struct acpi_madt_eio_pic *acpi_eiointc)
{
	int i, parent_irq;
	int i, ret, parent_irq;
	unsigned long node_map;
	struct eiointc_priv *priv;

@@ -380,15 +416,16 @@ int __init eiointc_acpi_init(struct irq_domain *parent,
	parent_irq = irq_create_mapping(parent, acpi_eiointc->cascade);
	irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch, priv);

	register_syscore_ops(&eiointc_syscore_ops);
	cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING,
				  "irqchip/loongarch/intc:starting",
				  eiointc_router_init, NULL);

	acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, pch_group);
	acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, msi_group);
	acpi_cascade_irqdomain_init();
	ret = acpi_cascade_irqdomain_init();

	return 0;
	return ret;

out_free_handle:
	irq_domain_free_fwnode(priv->domain_handle);
@@ -16,11 +16,11 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/syscore_ops.h>

 /* Registers */
 #define HTVEC_EN_OFF		0x20
 #define HTVEC_MAX_PARENT_IRQ	8

 #define VEC_COUNT_PER_REG	32
 #define VEC_REG_IDX(irq_id)	((irq_id) / VEC_COUNT_PER_REG)
 #define VEC_REG_BIT(irq_id)	((irq_id) % VEC_COUNT_PER_REG)
@@ -30,8 +30,11 @@ struct htvec {
 	void __iomem		*base;
 	struct irq_domain	*htvec_domain;
 	raw_spinlock_t		htvec_lock;
+	u32			saved_vec_en[HTVEC_MAX_PARENT_IRQ];
 };

+static struct htvec *htvec_priv;
+
 static void htvec_irq_dispatch(struct irq_desc *desc)
 {
 	int i;
@@ -155,64 +158,169 @@ static void htvec_reset(struct htvec *priv)
 	}
 }

-static int htvec_of_init(struct device_node *node,
-			 struct device_node *parent)
+static int htvec_suspend(void)
+{
+	int i;
+
+	for (i = 0; i < htvec_priv->num_parents; i++)
+		htvec_priv->saved_vec_en[i] = readl(htvec_priv->base + HTVEC_EN_OFF + 4 * i);
+
+	return 0;
+}
+
+static void htvec_resume(void)
+{
+	int i;
+
+	for (i = 0; i < htvec_priv->num_parents; i++)
+		writel(htvec_priv->saved_vec_en[i], htvec_priv->base + HTVEC_EN_OFF + 4 * i);
+}
+
+static struct syscore_ops htvec_syscore_ops = {
+	.suspend = htvec_suspend,
+	.resume = htvec_resume,
+};
+
+static int htvec_init(phys_addr_t addr, unsigned long size,
+		      int num_parents, int parent_irq[], struct fwnode_handle *domain_handle)
 {
+	int i;
 	struct htvec *priv;
-	int err, parent_irq[8], i;

 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;

+	priv->num_parents = num_parents;
+	priv->base = ioremap(addr, size);
 	raw_spin_lock_init(&priv->htvec_lock);
-	priv->base = of_iomap(node, 0);
-	if (!priv->base) {
-		err = -ENOMEM;
-		goto free_priv;
-	}

 	/* Setup IRQ domain */
 	priv->htvec_domain = irq_domain_create_linear(domain_handle,
 					(VEC_COUNT_PER_REG * priv->num_parents),
 					&htvec_domain_ops, priv);
 	if (!priv->htvec_domain) {
 		pr_err("loongson-htvec: cannot add IRQ domain\n");
 		goto iounmap_base;
 	}

 	htvec_reset(priv);

 	for (i = 0; i < priv->num_parents; i++) {
 		irq_set_chained_handler_and_data(parent_irq[i],
 						 htvec_irq_dispatch, priv);
 	}

+	htvec_priv = priv;
+
+	register_syscore_ops(&htvec_syscore_ops);
+
 	return 0;

 iounmap_base:
 	iounmap(priv->base);
 	kfree(priv);

 	return -EINVAL;
 }
+#ifdef CONFIG_OF
+
+static int htvec_of_init(struct device_node *node,
+			 struct device_node *parent)
+{
+	int i, err;
+	int parent_irq[8];
+	int num_parents = 0;
+	struct resource res;
+
+	if (of_address_to_resource(node, 0, &res))
+		return -EINVAL;
+
 	/* Interrupt may come from any of the 8 interrupt lines */
 	for (i = 0; i < HTVEC_MAX_PARENT_IRQ; i++) {
 		parent_irq[i] = irq_of_parse_and_map(node, i);
 		if (parent_irq[i] <= 0)
 			break;

-		priv->num_parents++;
+		num_parents++;
 	}

-	if (!priv->num_parents) {
-		pr_err("Failed to get parent irqs\n");
-		err = -ENODEV;
-		goto iounmap_base;
-	}
-
-	priv->htvec_domain = irq_domain_create_linear(of_node_to_fwnode(node),
-					(VEC_COUNT_PER_REG * priv->num_parents),
-					&htvec_domain_ops, priv);
-	if (!priv->htvec_domain) {
-		pr_err("Failed to create IRQ domain\n");
-		err = -ENOMEM;
-		goto irq_dispose;
-	}
-
-	htvec_reset(priv);
-
-	for (i = 0; i < priv->num_parents; i++)
-		irq_set_chained_handler_and_data(parent_irq[i],
-						 htvec_irq_dispatch, priv);
+	err = htvec_init(res.start, resource_size(&res),
+			 num_parents, parent_irq, of_node_to_fwnode(node));
+	if (err < 0)
+		return err;

 	return 0;

-irq_dispose:
-	for (; i > 0; i--)
-		irq_dispose_mapping(parent_irq[i - 1]);
-iounmap_base:
-	iounmap(priv->base);
-free_priv:
-	kfree(priv);
-
-	return err;
 }

 IRQCHIP_DECLARE(htvec, "loongson,htvec-1.0", htvec_of_init);
+
+#endif
+#ifdef CONFIG_ACPI
+static int __init pch_pic_parse_madt(union acpi_subtable_headers *header,
+					const unsigned long end)
+{
+	struct acpi_madt_bio_pic *pchpic_entry = (struct acpi_madt_bio_pic *)header;
+
+	return pch_pic_acpi_init(htvec_priv->htvec_domain, pchpic_entry);
+}
+
+static int __init pch_msi_parse_madt(union acpi_subtable_headers *header,
+					const unsigned long end)
+{
+	struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header;
+
+	return pch_msi_acpi_init(htvec_priv->htvec_domain, pchmsi_entry);
+}
+
+static int __init acpi_cascade_irqdomain_init(void)
+{
+	int r;
+
+	r = acpi_table_parse_madt(ACPI_MADT_TYPE_BIO_PIC, pch_pic_parse_madt, 0);
+	if (r < 0)
+		return r;
+
+	r = acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC, pch_msi_parse_madt, 0);
+	if (r < 0)
+		return r;
+
+	return 0;
+}
+
+int __init htvec_acpi_init(struct irq_domain *parent,
+				   struct acpi_madt_ht_pic *acpi_htvec)
+{
+	int i, ret;
+	int num_parents, parent_irq[8];
+	struct fwnode_handle *domain_handle;
+
+	if (!acpi_htvec)
+		return -EINVAL;
+
+	num_parents = HTVEC_MAX_PARENT_IRQ;
+
+	domain_handle = irq_domain_alloc_fwnode(&acpi_htvec->address);
+	if (!domain_handle) {
+		pr_err("Unable to allocate domain handle\n");
+		return -ENOMEM;
+	}
+
+	/* Interrupt may come from any of the 8 interrupt lines */
+	for (i = 0; i < HTVEC_MAX_PARENT_IRQ; i++)
+		parent_irq[i] = irq_create_mapping(parent, acpi_htvec->cascade[i]);
+
+	ret = htvec_init(acpi_htvec->address, acpi_htvec->size,
+			 num_parents, parent_irq, domain_handle);
+
+	if (ret == 0)
+		ret = acpi_cascade_irqdomain_init();
+	else
+		irq_domain_free_fwnode(domain_handle);
+
+	return ret;
+}
+
+#endif
@@ -167,7 +167,12 @@ static int liointc_domain_xlate(struct irq_domain *d, struct device_node *ctrlr,
 	if (WARN_ON(intsize < 1))
 		return -EINVAL;
 	*out_hwirq = intspec[0] - GSI_MIN_CPU_IRQ;
+
+	if (intsize > 1)
+		*out_type = intspec[1] & IRQ_TYPE_SENSE_MASK;
+	else
+		*out_type = IRQ_TYPE_NONE;

 	return 0;
 }
@@ -207,10 +212,13 @@ static int liointc_init(phys_addr_t addr, unsigned long size, int revision,
 						"reg-names", core_reg_names[i]);

 			if (index < 0)
-				goto out_iounmap;
+				continue;

 			priv->core_isr[i] = of_iomap(node, index);
 		}
+
+		if (!priv->core_isr[0])
+			goto out_iounmap;
 	}

 	/* Setup IRQ domain */
@@ -349,6 +357,26 @@ IRQCHIP_DECLARE(loongson_liointc_2_0, "loongson,liointc-2.0", liointc_of_init);
 #endif

 #ifdef CONFIG_ACPI
+static int __init htintc_parse_madt(union acpi_subtable_headers *header,
+					const unsigned long end)
+{
+	struct acpi_madt_ht_pic *htintc_entry = (struct acpi_madt_ht_pic *)header;
+	struct irq_domain *parent = irq_find_matching_fwnode(liointc_handle, DOMAIN_BUS_ANY);
+
+	return htvec_acpi_init(parent, htintc_entry);
+}
+
+static int __init acpi_cascade_irqdomain_init(void)
+{
+	int r;
+
+	r = acpi_table_parse_madt(ACPI_MADT_TYPE_HT_PIC, htintc_parse_madt, 0);
+	if (r < 0)
+		return r;
+
+	return 0;
+}
+
 int __init liointc_acpi_init(struct irq_domain *parent, struct acpi_madt_lio_pic *acpi_liointc)
 {
 	int ret;
@@ -365,9 +393,12 @@ int __init liointc_acpi_init(struct irq_domain *parent, struct acpi_madt_lio_pic
 		pr_err("Unable to allocate domain handle\n");
 		return -ENOMEM;
 	}

 	ret = liointc_init(acpi_liointc->address, acpi_liointc->size,
 			1, domain_handle, NULL);
-	if (ret)
+	if (ret == 0)
+		ret = acpi_cascade_irqdomain_init();
+	else
 		irq_domain_free_fwnode(domain_handle);

 	return ret;
@@ -13,6 +13,7 @@
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
 #include <linux/kernel.h>
+#include <linux/syscore_ops.h>

 /* Registers */
 #define LPC_INT_CTL		0x00
@@ -34,6 +35,7 @@ struct pch_lpc {
 	u32			saved_reg_pol;
 };

+static struct pch_lpc *pch_lpc_priv;
 struct fwnode_handle *pch_lpc_handle;

 static void lpc_irq_ack(struct irq_data *d)
@@ -147,6 +149,26 @@ static int pch_lpc_disabled(struct pch_lpc *priv)
 		(readl(priv->base + LPC_INT_STS) == 0xffffffff);
 }

+static int pch_lpc_suspend(void)
+{
+	pch_lpc_priv->saved_reg_ctl = readl(pch_lpc_priv->base + LPC_INT_CTL);
+	pch_lpc_priv->saved_reg_ena = readl(pch_lpc_priv->base + LPC_INT_ENA);
+	pch_lpc_priv->saved_reg_pol = readl(pch_lpc_priv->base + LPC_INT_POL);
+	return 0;
+}
+
+static void pch_lpc_resume(void)
+{
+	writel(pch_lpc_priv->saved_reg_ctl, pch_lpc_priv->base + LPC_INT_CTL);
+	writel(pch_lpc_priv->saved_reg_ena, pch_lpc_priv->base + LPC_INT_ENA);
+	writel(pch_lpc_priv->saved_reg_pol, pch_lpc_priv->base + LPC_INT_POL);
+}
+
+static struct syscore_ops pch_lpc_syscore_ops = {
+	.suspend = pch_lpc_suspend,
+	.resume = pch_lpc_resume,
+};
+
 int __init pch_lpc_acpi_init(struct irq_domain *parent,
 					struct acpi_madt_lpc_pic *acpi_pchlpc)
 {
@@ -191,7 +213,10 @@ int __init pch_lpc_acpi_init(struct irq_domain *parent,
 	parent_irq = irq_create_fwspec_mapping(&fwspec);
 	irq_set_chained_handler_and_data(parent_irq, lpc_irq_dispatch, priv);

+	pch_lpc_priv = priv;
 	pch_lpc_handle = irq_handle;
+	register_syscore_ops(&pch_lpc_syscore_ops);

 	return 0;

 free_irq_handle:
@@ -15,6 +15,7 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/syscore_ops.h>

 /* Registers */
 #define PCH_PIC_MASK		0x20
@@ -42,6 +43,9 @@ struct pch_pic {
 	raw_spinlock_t		pic_lock;
 	u32			vec_count;
 	u32			gsi_base;
+	u32			saved_vec_en[PIC_REG_COUNT];
+	u32			saved_vec_pol[PIC_REG_COUNT];
+	u32			saved_vec_edge[PIC_REG_COUNT];
 };

 static struct pch_pic *pch_pic_priv[MAX_IO_PICS];
@@ -145,6 +149,7 @@ static struct irq_chip pch_pic_irq_chip = {
 	.irq_ack		= pch_pic_ack_irq,
 	.irq_set_affinity	= irq_chip_set_affinity_parent,
 	.irq_set_type		= pch_pic_set_type,
+	.flags			= IRQCHIP_SKIP_SET_WAKE,
 };

 static int pch_pic_domain_translate(struct irq_domain *d,
@@ -155,14 +160,20 @@ static int pch_pic_domain_translate(struct irq_domain *d,
 	struct pch_pic *priv = d->host_data;
 	struct device_node *of_node = to_of_node(fwspec->fwnode);

-	if (fwspec->param_count < 1)
-		return -EINVAL;
-
 	if (of_node) {
+		if (fwspec->param_count < 2)
+			return -EINVAL;
+
 		*hwirq = fwspec->param[0] + priv->ht_vec_base;
 		*type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK;
 	} else {
+		if (fwspec->param_count < 1)
+			return -EINVAL;
+
 		*hwirq = fwspec->param[0] - priv->gsi_base;
 		if (fwspec->param_count > 1)
 			*type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK;
 		else
 			*type = IRQ_TYPE_NONE;
 	}
@@ -228,6 +239,46 @@ static void pch_pic_reset(struct pch_pic *priv)
 	}
 }

+static int pch_pic_suspend(void)
+{
+	int i, j;
+
+	for (i = 0; i < nr_pics; i++) {
+		for (j = 0; j < PIC_REG_COUNT; j++) {
+			pch_pic_priv[i]->saved_vec_pol[j] =
+				readl(pch_pic_priv[i]->base + PCH_PIC_POL + 4 * j);
+			pch_pic_priv[i]->saved_vec_edge[j] =
+				readl(pch_pic_priv[i]->base + PCH_PIC_EDGE + 4 * j);
+			pch_pic_priv[i]->saved_vec_en[j] =
+				readl(pch_pic_priv[i]->base + PCH_PIC_MASK + 4 * j);
+		}
+	}
+
+	return 0;
+}
+
+static void pch_pic_resume(void)
+{
+	int i, j;
+
+	for (i = 0; i < nr_pics; i++) {
+		pch_pic_reset(pch_pic_priv[i]);
+		for (j = 0; j < PIC_REG_COUNT; j++) {
+			writel(pch_pic_priv[i]->saved_vec_pol[j],
+					pch_pic_priv[i]->base + PCH_PIC_POL + 4 * j);
+			writel(pch_pic_priv[i]->saved_vec_edge[j],
+					pch_pic_priv[i]->base + PCH_PIC_EDGE + 4 * j);
+			writel(pch_pic_priv[i]->saved_vec_en[j],
+					pch_pic_priv[i]->base + PCH_PIC_MASK + 4 * j);
+		}
+	}
+}
+
+static struct syscore_ops pch_pic_syscore_ops = {
+	.suspend = pch_pic_suspend,
+	.resume = pch_pic_resume,
+};
+
 static int pch_pic_init(phys_addr_t addr, unsigned long size, int vec_base,
 			struct irq_domain *parent_domain, struct fwnode_handle *domain_handle,
 			u32 gsi_base)
@@ -260,6 +311,8 @@ static int pch_pic_init(phys_addr_t addr, unsigned long size, int vec_base,
 	pch_pic_handle[nr_pics] = domain_handle;
 	pch_pic_priv[nr_pics++] = priv;

+	register_syscore_ops(&pch_pic_syscore_ops);
+
 	return 0;

 iounmap_base:
@@ -325,8 +378,7 @@ int find_pch_pic(u32 gsi)
 	return -1;
 }

-static int __init
-pch_lpc_parse_madt(union acpi_subtable_headers *header,
+static int __init pch_lpc_parse_madt(union acpi_subtable_headers *header,
 				const unsigned long end)
 {
 	struct acpi_madt_lpc_pic *pchlpc_entry = (struct acpi_madt_lpc_pic *)header;
@@ -336,8 +388,12 @@ pch_lpc_parse_madt(union acpi_subtable_headers *header,

 static int __init acpi_cascade_irqdomain_init(void)
 {
-	acpi_table_parse_madt(ACPI_MADT_TYPE_LPC_PIC,
-			pch_lpc_parse_madt, 0);
+	int r;
+
+	r = acpi_table_parse_madt(ACPI_MADT_TYPE_LPC_PIC, pch_lpc_parse_madt, 0);
+	if (r < 0)
+		return r;
+
+	return 0;
 }

@@ -364,7 +420,7 @@ int __init pch_pic_acpi_init(struct irq_domain *parent,
 	}

 	if (acpi_pchpic->id == 0)
-		acpi_cascade_irqdomain_init();
+		ret = acpi_cascade_irqdomain_init();

 	return ret;
 }
@@ -203,7 +203,7 @@ ls_extirq_of_init(struct device_node *node, struct device_node *parent)
 	if (ret)
 		goto err_parse_map;

-	priv->big_endian = of_device_is_big_endian(parent);
+	priv->big_endian = of_device_is_big_endian(node->parent);
 	priv->is_ls1021a_or_ls1043a = of_device_is_compatible(node, "fsl,ls1021a-extirq") ||
 				      of_device_is_compatible(node, "fsl,ls1043a-extirq");
 	raw_spin_lock_init(&priv->lock);
@@ -494,7 +494,7 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
 		map = GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin;

 		/*
-		 * If adding support for more per-cpu interrupts, keep the the
+		 * If adding support for more per-cpu interrupts, keep the
 		 * array in gic_all_vpes_irq_cpu_online() in sync.
 		 */
 		switch (intr) {
@@ -15,14 +15,41 @@
 #include <linux/slab.h>
 #include <linux/syscore_ops.h>

-#define CIRQ_ACK	0x40
-#define CIRQ_MASK_SET	0xc0
-#define CIRQ_MASK_CLR	0x100
-#define CIRQ_SENS_SET	0x180
-#define CIRQ_SENS_CLR	0x1c0
-#define CIRQ_POL_SET	0x240
-#define CIRQ_POL_CLR	0x280
-#define CIRQ_CONTROL	0x300
+enum mtk_cirq_regoffs_index {
+	CIRQ_STA,
+	CIRQ_ACK,
+	CIRQ_MASK_SET,
+	CIRQ_MASK_CLR,
+	CIRQ_SENS_SET,
+	CIRQ_SENS_CLR,
+	CIRQ_POL_SET,
+	CIRQ_POL_CLR,
+	CIRQ_CONTROL
+};
+
+static const u32 mtk_cirq_regoffs_v1[] = {
+	[CIRQ_STA]	= 0x0,
+	[CIRQ_ACK]	= 0x40,
+	[CIRQ_MASK_SET]	= 0xc0,
+	[CIRQ_MASK_CLR]	= 0x100,
+	[CIRQ_SENS_SET]	= 0x180,
+	[CIRQ_SENS_CLR]	= 0x1c0,
+	[CIRQ_POL_SET]	= 0x240,
+	[CIRQ_POL_CLR]	= 0x280,
+	[CIRQ_CONTROL]	= 0x300,
+};
+
+static const u32 mtk_cirq_regoffs_v2[] = {
+	[CIRQ_STA]	= 0x0,
+	[CIRQ_ACK]	= 0x80,
+	[CIRQ_MASK_SET]	= 0x180,
+	[CIRQ_MASK_CLR]	= 0x200,
+	[CIRQ_SENS_SET]	= 0x300,
+	[CIRQ_SENS_CLR]	= 0x380,
+	[CIRQ_POL_SET]	= 0x480,
+	[CIRQ_POL_CLR]	= 0x500,
+	[CIRQ_CONTROL]	= 0x600,
+};

 #define CIRQ_EN	0x1
 #define CIRQ_EDGE	0x2
@@ -32,18 +59,32 @@ struct mtk_cirq_chip_data {
 	void __iomem *base;
 	unsigned int ext_irq_start;
 	unsigned int ext_irq_end;
+	const u32 *offsets;
 	struct irq_domain *domain;
 };

 static struct mtk_cirq_chip_data *cirq_data;

-static void mtk_cirq_write_mask(struct irq_data *data, unsigned int offset)
+static void __iomem *mtk_cirq_reg(struct mtk_cirq_chip_data *chip_data,
+				  enum mtk_cirq_regoffs_index idx)
+{
+	return chip_data->base + chip_data->offsets[idx];
+}
+
+static void __iomem *mtk_cirq_irq_reg(struct mtk_cirq_chip_data *chip_data,
+				      enum mtk_cirq_regoffs_index idx,
+				      unsigned int cirq_num)
+{
+	return mtk_cirq_reg(chip_data, idx) + (cirq_num / 32) * 4;
+}
+
+static void mtk_cirq_write_mask(struct irq_data *data, enum mtk_cirq_regoffs_index idx)
 {
 	struct mtk_cirq_chip_data *chip_data = data->chip_data;
 	unsigned int cirq_num = data->hwirq;
 	u32 mask = 1 << (cirq_num % 32);

-	writel_relaxed(mask, chip_data->base + offset + (cirq_num / 32) * 4);
+	writel_relaxed(mask, mtk_cirq_irq_reg(chip_data, idx, cirq_num));
 }

 static void mtk_cirq_mask(struct irq_data *data)
@@ -160,6 +201,7 @@ static const struct irq_domain_ops cirq_domain_ops = {
 #ifdef CONFIG_PM_SLEEP
 static int mtk_cirq_suspend(void)
 {
+	void __iomem *reg;
 	u32 value, mask;
 	unsigned int irq, hwirq_num;
 	bool pending, masked;
@@ -200,31 +242,34 @@ static int mtk_cirq_suspend(void)
 			continue;
 		}

+		reg = mtk_cirq_irq_reg(cirq_data, CIRQ_ACK, i);
 		mask = 1 << (i % 32);
-		writel_relaxed(mask, cirq_data->base + CIRQ_ACK + (i / 32) * 4);
+		writel_relaxed(mask, reg);
 	}

 	/* set edge_only mode, record edge-triggerd interrupts */
 	/* enable cirq */
-	value = readl_relaxed(cirq_data->base + CIRQ_CONTROL);
+	reg = mtk_cirq_reg(cirq_data, CIRQ_CONTROL);
+	value = readl_relaxed(reg);
 	value |= (CIRQ_EDGE | CIRQ_EN);
-	writel_relaxed(value, cirq_data->base + CIRQ_CONTROL);
+	writel_relaxed(value, reg);

 	return 0;
 }

 static void mtk_cirq_resume(void)
 {
+	void __iomem *reg = mtk_cirq_reg(cirq_data, CIRQ_CONTROL);
 	u32 value;

 	/* flush recorded interrupts, will send signals to parent controller */
-	value = readl_relaxed(cirq_data->base + CIRQ_CONTROL);
-	writel_relaxed(value | CIRQ_FLUSH, cirq_data->base + CIRQ_CONTROL);
+	value = readl_relaxed(reg);
+	writel_relaxed(value | CIRQ_FLUSH, reg);

 	/* disable cirq */
-	value = readl_relaxed(cirq_data->base + CIRQ_CONTROL);
+	value = readl_relaxed(reg);
 	value &= ~(CIRQ_EDGE | CIRQ_EN);
-	writel_relaxed(value, cirq_data->base + CIRQ_CONTROL);
+	writel_relaxed(value, reg);
 }

 static struct syscore_ops mtk_cirq_syscore_ops = {
@@ -240,10 +285,19 @@ static void mtk_cirq_syscore_init(void)
 static inline void mtk_cirq_syscore_init(void) {}
 #endif

+static const struct of_device_id mtk_cirq_of_match[] = {
+	{ .compatible = "mediatek,mt2701-cirq", .data = &mtk_cirq_regoffs_v1 },
+	{ .compatible = "mediatek,mt8135-cirq", .data = &mtk_cirq_regoffs_v1 },
+	{ .compatible = "mediatek,mt8173-cirq", .data = &mtk_cirq_regoffs_v1 },
+	{ .compatible = "mediatek,mt8192-cirq", .data = &mtk_cirq_regoffs_v2 },
+	{ /* sentinel */ }
+};
+
 static int __init mtk_cirq_of_init(struct device_node *node,
 				   struct device_node *parent)
 {
 	struct irq_domain *domain, *domain_parent;
+	const struct of_device_id *match;
 	unsigned int irq_num;
 	int ret;

@@ -274,6 +328,13 @@ static int __init mtk_cirq_of_init(struct device_node *node,
 	if (ret)
 		goto out_unmap;

+	match = of_match_node(mtk_cirq_of_match, node);
+	if (!match) {
+		ret = -ENODEV;
+		goto out_unmap;
+	}
+	cirq_data->offsets = match->data;
+
 	irq_num = cirq_data->ext_irq_end - cirq_data->ext_irq_start + 1;
 	domain = irq_domain_add_hierarchy(domain_parent, 0,
 					  irq_num, node,
@@ -151,9 +151,9 @@ static int
 mvebu_icu_irq_domain_translate(struct irq_domain *d, struct irq_fwspec *fwspec,
 			       unsigned long *hwirq, unsigned int *type)
 {
-	struct mvebu_icu_msi_data *msi_data = platform_msi_get_host_data(d);
-	struct mvebu_icu *icu = platform_msi_get_host_data(d);
 	unsigned int param_count = static_branch_unlikely(&legacy_bindings) ? 3 : 2;
+	struct mvebu_icu_msi_data *msi_data = platform_msi_get_host_data(d);
+	struct mvebu_icu *icu = msi_data->icu;

 	/* Check the count of the parameters in dt */
 	if (WARN_ON(fwspec->param_count != param_count)) {
@@ -187,7 +187,8 @@ static struct irq_chip plic_edge_chip = {
 	.irq_set_affinity = plic_set_affinity,
 #endif
 	.irq_set_type	= plic_irq_set_type,
-	.flags		= IRQCHIP_AFFINITY_PRE_STARTUP,
+	.flags		= IRQCHIP_SKIP_SET_WAKE |
+			  IRQCHIP_AFFINITY_PRE_STARTUP,
 };

 static struct irq_chip plic_chip = {
@@ -201,7 +202,8 @@ static struct irq_chip plic_chip = {
 	.irq_set_affinity = plic_set_affinity,
 #endif
 	.irq_set_type	= plic_irq_set_type,
-	.flags		= IRQCHIP_AFFINITY_PRE_STARTUP,
+	.flags		= IRQCHIP_SKIP_SET_WAKE |
+			  IRQCHIP_AFFINITY_PRE_STARTUP,
 };

 static int plic_irq_set_type(struct irq_data *d, unsigned int type)
@@ -65,8 +65,7 @@ static int sl28cpld_intc_probe(struct platform_device *pdev)
 	irqchip->chip.num_irqs = ARRAY_SIZE(sl28cpld_irqs);
 	irqchip->chip.num_regs = 1;
 	irqchip->chip.status_base = base + INTC_IP;
-	irqchip->chip.mask_base = base + INTC_IE;
-	irqchip->chip.mask_invert = true;
+	irqchip->chip.unmask_base = base + INTC_IE;
 	irqchip->chip.ack_base = base + INTC_IP;

 	return devm_regmap_add_irq_chip_fwnode(dev, dev_fwnode(dev),
@@ -153,18 +153,13 @@ static int st_irq_syscfg_enable(struct platform_device *pdev)
 static int st_irq_syscfg_probe(struct platform_device *pdev)
 {
 	struct device_node *np = pdev->dev.of_node;
-	const struct of_device_id *match;
 	struct st_irq_syscfg *ddata;

 	ddata = devm_kzalloc(&pdev->dev, sizeof(*ddata), GFP_KERNEL);
 	if (!ddata)
 		return -ENOMEM;

-	match = of_match_device(st_irq_syscfg_match, &pdev->dev);
-	if (!match)
-		return -ENODEV;
-
-	ddata->syscfg = (unsigned int)match->data;
+	ddata->syscfg = (unsigned int) device_get_match_data(&pdev->dev);

 	ddata->regmap = syscon_regmap_lookup_by_phandle(np, "st,syscfg");
 	if (IS_ERR(ddata->regmap)) {
@@ -168,7 +168,7 @@ static void ti_sci_inta_irq_handler(struct irq_desc *desc)
 /**
  * ti_sci_inta_xlate_irq() - Translate hwirq to parent's hwirq.
  * @inta:	IRQ domain corresponding to Interrupt Aggregator
- * @irq:	Hardware irq corresponding to the above irq domain
+ * @vint_id:	Hardware irq corresponding to the above irq domain
  *
  * Return parent irq number if translation is available else -ENOENT.
  */
@@ -146,6 +146,7 @@ static int __init wpcm450_aic_of_init(struct device_node *node,
 	aic->regs = of_iomap(node, 0);
 	if (!aic->regs) {
 		pr_err("Failed to map WPCM450 AIC registers\n");
+		kfree(aic);
 		return -ENOMEM;
 	}
@@ -223,7 +223,7 @@ config BCM_FLEXRM_MBOX
 	tristate "Broadcom FlexRM Mailbox"
 	depends on ARM64
 	depends on ARCH_BCM_IPROC || COMPILE_TEST
-	select GENERIC_MSI_IRQ_DOMAIN
+	select GENERIC_MSI_IRQ
 	default m if ARCH_BCM_IPROC
 	help
 	  Mailbox implementation of the Broadcom FlexRM ring manager,
@@ -51,11 +51,6 @@ config PCI_MSI

 	   If you don't know what to do here, say Y.

-config PCI_MSI_IRQ_DOMAIN
-	def_bool y
-	depends on PCI_MSI
-	select GENERIC_MSI_IRQ_DOMAIN
-
 config PCI_MSI_ARCH_FALLBACKS
 	bool

@@ -192,7 +187,7 @@ config PCI_LABEL

 config PCI_HYPERV
 	tristate "Hyper-V PCI Frontend"
-	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
+	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && SYSFS
 	select PCI_HYPERV_INTERFACE
 	help
 	  The PCI device frontend driver allows the kernel to import arbitrary
@@ -19,7 +19,7 @@ config PCI_AARDVARK
 	tristate "Aardvark PCIe controller"
 	depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
 	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCI_BRIDGE_EMUL
 	help
 	  Add support for Aardvark 64bit PCIe Host Controller. This
@@ -29,7 +29,7 @@ config PCI_AARDVARK
 config PCIE_XILINX_NWL
 	bool "NWL PCIe Core"
 	depends on ARCH_ZYNQMP || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Say 'Y' here if you want kernel support for Xilinx
 	  NWL PCIe controller. The controller can act as Root Port
@@ -53,7 +53,7 @@ config PCI_IXP4XX
 config PCI_TEGRA
 	bool "NVIDIA Tegra PCIe controller"
 	depends on ARCH_TEGRA || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Say Y here if you want support for the PCIe host controller found
 	  on NVIDIA Tegra SoCs.
@@ -70,7 +70,7 @@ config PCI_RCAR_GEN2
 config PCIE_RCAR_HOST
 	bool "Renesas R-Car PCIe host controller"
 	depends on ARCH_RENESAS || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Say Y here if you want PCIe controller support on R-Car SoCs in host
 	  mode.
@@ -99,7 +99,7 @@ config PCI_HOST_GENERIC
 config PCIE_XILINX
 	bool "Xilinx AXI PCIe host bridge support"
 	depends on OF || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
 	  Host Bridge driver.
@@ -124,7 +124,7 @@ config PCI_XGENE
 config PCI_XGENE_MSI
 	bool "X-Gene v1 PCIe MSI feature"
 	depends on PCI_XGENE
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	default y
 	help
 	  Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC.
@@ -170,7 +170,7 @@ config PCIE_IPROC_BCMA
 config PCIE_IPROC_MSI
 	bool "Broadcom iProc PCIe MSI support"
 	depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	default ARCH_BCM_IPROC
 	help
 	  Say Y here if you want to enable MSI support for Broadcom's iProc
@@ -186,7 +186,7 @@ config PCIE_ALTERA
 config PCIE_ALTERA_MSI
 	tristate "Altera PCIe MSI feature"
 	depends on PCIE_ALTERA
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Say Y here if you want PCIe MSI support for the Altera FPGA.
 	  This MSI driver supports Altera MSI to GIC controller IP.
@@ -215,7 +215,7 @@ config PCIE_ROCKCHIP_HOST
 	tristate "Rockchip PCIe host controller"
 	depends on ARCH_ROCKCHIP || COMPILE_TEST
 	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select MFD_SYSCON
 	select PCIE_ROCKCHIP
 	help
@@ -239,7 +239,7 @@ config PCIE_MEDIATEK
 	tristate "MediaTek PCIe controller"
 	depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST
 	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Say Y here if you want to enable PCIe controller support on
 	  MediaTek SoCs.
@@ -247,7 +247,7 @@ config PCIE_MEDIATEK
 config PCIE_MEDIATEK_GEN3
 	tristate "MediaTek Gen3 PCIe controller"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	help
 	  Adds support for PCIe Gen3 MAC controller for MediaTek SoCs.
 	  This PCIe controller is compatible with Gen3, Gen2 and Gen1 speed,
@@ -277,7 +277,7 @@ config PCIE_BRCMSTB
 	depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCMBCA || \
 		   BMIPS_GENERIC || COMPILE_TEST
 	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	default ARCH_BRCMSTB || BMIPS_GENERIC
 	help
 	  Say Y here to enable PCIe host controller support for
@@ -285,7 +285,7 @@ config PCIE_BRCMSTB

 config PCI_HYPERV_INTERFACE
 	tristate "Hyper-V PCI Interface"
-	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN
+	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI
 	help
 	  The Hyper-V PCI Interface is a helper driver allows other drivers to
 	  have a common interface with the Hyper-V PCI frontend driver.
@@ -303,8 +303,6 @@ config PCI_LOONGSON
 config PCIE_MICROCHIP_HOST
 	bool "Microchip AXI PCIe host bridge support"
 	depends on PCI_MSI && OF
-	select PCI_MSI_IRQ_DOMAIN
-	select GENERIC_MSI_IRQ_DOMAIN
 	select PCI_HOST_COMMON
 	help
 	  Say Y here if you want kernel to support the Microchip AXI PCIe
@@ -326,7 +324,7 @@ config PCIE_APPLE
 	tristate "Apple PCIe controller"
 	depends on ARCH_APPLE || COMPILE_TEST
 	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCI_HOST_COMMON
 	help
 	  Say Y here if you want to enable PCIe controller support on Apple
@@ -21,7 +21,7 @@ config PCI_DRA7XX_HOST
 	tristate "TI DRA7xx PCIe controller Host Mode"
 	depends on SOC_DRA7XX || COMPILE_TEST
 	depends on OF && HAS_IOMEM && TI_PIPE3
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	select PCI_DRA7XX
 	default y if SOC_DRA7XX
@@ -53,7 +53,7 @@ config PCIE_DW_PLAT

 config PCIE_DW_PLAT_HOST
 	bool "Platform bus based DesignWare PCIe Controller - Host mode"
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	select PCIE_DW_PLAT
 	help
@@ -67,7 +67,7 @@ config PCIE_DW_PLAT_HOST

 config PCIE_DW_PLAT_EP
 	bool "Platform bus based DesignWare PCIe Controller - Endpoint mode"
-	depends on PCI && PCI_MSI_IRQ_DOMAIN
+	depends on PCI && PCI_MSI
 	depends on PCI_ENDPOINT
 	select PCIE_DW_EP
 	select PCIE_DW_PLAT
@@ -83,7 +83,7 @@ config PCIE_DW_PLAT_EP
 config PCI_EXYNOS
 	tristate "Samsung Exynos PCIe controller"
 	depends on ARCH_EXYNOS || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	help
 	  Enables support for the PCIe controller in the Samsung Exynos SoCs
@@ -94,13 +94,13 @@ config PCI_EXYNOS
 config PCI_IMX6
 	bool "Freescale i.MX6/7/8 PCIe controller"
 	depends on ARCH_MXC || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST

 config PCIE_SPEAR13XX
 	bool "STMicroelectronics SPEAr PCIe controller"
 	depends on ARCH_SPEAR13XX || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want PCIe support on SPEAr13XX SoCs.
@@ -111,7 +111,7 @@ config PCI_KEYSTONE
 config PCI_KEYSTONE_HOST
 	bool "PCI Keystone Host Mode"
 	depends on ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	select PCI_KEYSTONE
 	help
@@ -135,7 +135,7 @@ config PCI_KEYSTONE_EP
 config PCI_LAYERSCAPE
 	bool "Freescale Layerscape PCIe controller - Host mode"
 	depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	select MFD_SYSCON
 	help
@@ -160,7 +160,7 @@ config PCI_LAYERSCAPE_EP
 config PCI_HISI
 	depends on OF && (ARM64 || COMPILE_TEST)
 	bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	select PCI_HOST_COMMON
 	help
@@ -170,7 +170,7 @@ config PCI_HISI
 config PCIE_QCOM
 	bool "Qualcomm PCIe controller"
 	depends on OF && (ARCH_QCOM || COMPILE_TEST)
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_DW_HOST
 	select CRC8
 	help
@@ -191,7 +191,7 @@ config PCIE_QCOM_EP
 config PCIE_ARMADA_8K
 	bool "Marvell Armada-8K PCIe controller"
|
||||
depends on ARCH_MVEBU || COMPILE_TEST
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say Y here if you want to enable PCIe controller support on
|
||||
@ -205,7 +205,7 @@ config PCIE_ARTPEC6
|
||||
config PCIE_ARTPEC6_HOST
|
||||
bool "Axis ARTPEC-6 PCIe controller Host Mode"
|
||||
depends on MACH_ARTPEC6 || COMPILE_TEST
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
select PCIE_ARTPEC6
|
||||
help
|
||||
@ -226,7 +226,7 @@ config PCIE_ROCKCHIP_DW_HOST
|
||||
bool "Rockchip DesignWare PCIe controller"
|
||||
select PCIE_DW
|
||||
select PCIE_DW_HOST
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
depends on ARCH_ROCKCHIP || COMPILE_TEST
|
||||
depends on OF
|
||||
help
|
||||
@ -236,7 +236,7 @@ config PCIE_ROCKCHIP_DW_HOST
|
||||
config PCIE_INTEL_GW
|
||||
bool "Intel Gateway PCIe host controller support"
|
||||
depends on OF && (X86 || COMPILE_TEST)
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say 'Y' here to enable PCIe Host controller support on Intel
|
||||
@ -250,7 +250,7 @@ config PCIE_KEEMBAY
|
||||
config PCIE_KEEMBAY_HOST
|
||||
bool "Intel Keem Bay PCIe controller - Host mode"
|
||||
depends on ARCH_KEEMBAY || COMPILE_TEST
|
||||
depends on PCI && PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
select PCIE_KEEMBAY
|
||||
help
|
||||
@ -262,7 +262,7 @@ config PCIE_KEEMBAY_HOST
|
||||
config PCIE_KEEMBAY_EP
|
||||
bool "Intel Keem Bay PCIe controller - Endpoint mode"
|
||||
depends on ARCH_KEEMBAY || COMPILE_TEST
|
||||
depends on PCI && PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
depends on PCI_ENDPOINT
|
||||
select PCIE_DW_EP
|
||||
select PCIE_KEEMBAY
|
||||
@ -275,7 +275,7 @@ config PCIE_KEEMBAY_EP
|
||||
config PCIE_KIRIN
|
||||
depends on OF && (ARM64 || COMPILE_TEST)
|
||||
tristate "HiSilicon Kirin series SoCs PCIe controllers"
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say Y here if you want PCIe controller support
|
||||
@ -284,7 +284,7 @@ config PCIE_KIRIN
|
||||
config PCIE_HISI_STB
|
||||
bool "HiSilicon STB SoCs PCIe controllers"
|
||||
depends on ARCH_HISI || COMPILE_TEST
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say Y here if you want PCIe controller support on HiSilicon STB SoCs
|
||||
@ -292,7 +292,7 @@ config PCIE_HISI_STB
|
||||
config PCI_MESON
|
||||
tristate "MESON PCIe controller"
|
||||
default m if ARCH_MESON
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say Y here if you want to enable PCI controller support on Amlogic
|
||||
@ -306,7 +306,7 @@ config PCIE_TEGRA194
|
||||
config PCIE_TEGRA194_HOST
|
||||
tristate "NVIDIA Tegra194 (and later) PCIe controller - Host Mode"
|
||||
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
select PHY_TEGRA194_P2U
|
||||
select PCIE_TEGRA194
|
||||
@ -336,7 +336,7 @@ config PCIE_TEGRA194_EP
|
||||
config PCIE_VISCONTI_HOST
|
||||
bool "Toshiba Visconti PCIe controllers"
|
||||
depends on ARCH_VISCONTI || COMPILE_TEST
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say Y here if you want PCIe controller support on Toshiba Visconti SoC.
|
||||
@ -346,7 +346,7 @@ config PCIE_UNIPHIER
|
||||
bool "Socionext UniPhier PCIe host controllers"
|
||||
depends on ARCH_UNIPHIER || COMPILE_TEST
|
||||
depends on OF && HAS_IOMEM
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
Say Y here if you want PCIe host controller support on UniPhier SoCs.
|
||||
@ -365,7 +365,7 @@ config PCIE_UNIPHIER_EP
|
||||
config PCIE_AL
|
||||
bool "Amazon Annapurna Labs PCIe controller"
|
||||
depends on OF && (ARM64 || COMPILE_TEST)
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
select PCIE_DW_HOST
|
||||
select PCI_ECAM
|
||||
help
|
||||
@ -377,7 +377,7 @@ config PCIE_AL
|
||||
|
||||
config PCIE_FU740
|
||||
bool "SiFive FU740 PCIe host controller"
|
||||
depends on PCI_MSI_IRQ_DOMAIN
|
||||
depends on PCI_MSI
|
||||
depends on SOC_SIFIVE || COMPILE_TEST
|
||||
select PCIE_DW_HOST
|
||||
help
|
||||
|
@@ -8,14 +8,14 @@ config PCIE_MOBIVEIL

 config PCIE_MOBIVEIL_HOST
 	bool
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_MOBIVEIL

 config PCIE_MOBIVEIL_PLAT
 	bool "Mobiveil AXI PCIe controller"
 	depends on ARCH_ZYNQMP || COMPILE_TEST
 	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_MOBIVEIL_HOST
 	help
 	  Say Y here if you want to enable support for the Mobiveil AXI PCIe

@@ -25,7 +25,7 @@ config PCIE_MOBIVEIL_PLAT
 config PCIE_LAYERSCAPE_GEN4
 	bool "Freescale Layerscape PCIe Gen4 controller"
 	depends on ARCH_LAYERSCAPE || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI
 	select PCIE_MOBIVEIL_HOST
 	help
 	  Say Y here if you want PCIe Gen4 controller support on
@ -611,20 +611,7 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
|
||||
return cfg->vector;
|
||||
}
|
||||
|
||||
static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
|
||||
int nvec, msi_alloc_info_t *info)
|
||||
{
|
||||
int ret = pci_msi_prepare(domain, dev, nvec, info);
|
||||
|
||||
/*
|
||||
* By using the interrupt remapper in the hypervisor IOMMU, contiguous
|
||||
* CPU vectors is not needed for multi-MSI
|
||||
*/
|
||||
if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI)
|
||||
info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;
|
||||
|
||||
return ret;
|
||||
}
|
||||
#define hv_msi_prepare pci_msi_prepare
|
||||
|
||||
/**
|
||||
* hv_arch_irq_unmask() - "Unmask" the IRQ by setting its current
|
||||
|
@@ -2,6 +2,5 @@
 #
 # Makefile for the PCI/MSI
 obj-$(CONFIG_PCI)			+= pcidev_msi.o
-obj-$(CONFIG_PCI_MSI)			+= msi.o
-obj-$(CONFIG_PCI_MSI_IRQ_DOMAIN)	+= irqdomain.o
+obj-$(CONFIG_PCI_MSI)			+= api.o msi.o irqdomain.o
 obj-$(CONFIG_PCI_MSI_ARCH_FALLBACKS)	+= legacy.o

458	drivers/pci/msi/api.c	(new file)
@@ -0,0 +1,458 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * PCI MSI/MSI-X — Exported APIs for device drivers
 *
 * Copyright (C) 2003-2004 Intel
 * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
 * Copyright (C) 2016 Christoph Hellwig.
 * Copyright (C) 2022 Linutronix GmbH
 */

#include <linux/export.h>
#include <linux/irq.h>

#include "msi.h"

/**
 * pci_enable_msi() - Enable MSI interrupt mode on device
 * @dev: the PCI device to operate on
 *
 * Legacy device driver API to enable MSI interrupt mode on device and
 * allocate a single interrupt vector. On success, the allocated vector
 * Linux IRQ will be saved at @dev->irq. The driver must invoke
 * pci_disable_msi() on cleanup.
 *
 * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
 * pair should, in general, be used instead.
 *
 * Return: 0 on success, errno otherwise
 */
int pci_enable_msi(struct pci_dev *dev)
{
	int rc = __pci_enable_msi_range(dev, 1, 1, NULL);

	if (rc < 0)
		return rc;
	return 0;
}
EXPORT_SYMBOL(pci_enable_msi);

/**
 * pci_disable_msi() - Disable MSI interrupt mode on device
 * @dev: the PCI device to operate on
 *
 * Legacy device driver API to disable MSI interrupt mode on device,
 * free earlier allocated interrupt vectors, and restore INTx emulation.
 * The PCI device Linux IRQ (@dev->irq) is restored to its default
 * pin-assertion IRQ. This is the cleanup pair of pci_enable_msi().
 *
 * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
 * pair should, in general, be used instead.
 */
void pci_disable_msi(struct pci_dev *dev)
{
	if (!pci_msi_enabled() || !dev || !dev->msi_enabled)
		return;

	msi_lock_descs(&dev->dev);
	pci_msi_shutdown(dev);
	pci_free_msi_irqs(dev);
	msi_unlock_descs(&dev->dev);
}
EXPORT_SYMBOL(pci_disable_msi);

/**
 * pci_msix_vec_count() - Get number of MSI-X interrupt vectors on device
 * @dev: the PCI device to operate on
 *
 * Return: number of MSI-X interrupt vectors available on this device
 * (i.e., the device's MSI-X capability structure "table size"), -EINVAL
 * if the device is not MSI-X capable, other errnos otherwise.
 */
int pci_msix_vec_count(struct pci_dev *dev)
{
	u16 control;

	if (!dev->msix_cap)
		return -EINVAL;

	pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
	return msix_table_size(control);
}
EXPORT_SYMBOL(pci_msix_vec_count);
/**
 * pci_enable_msix_range() - Enable MSI-X interrupt mode on device
 * @dev:     the PCI device to operate on
 * @entries: input/output parameter, array of MSI-X configuration entries
 * @minvec:  minimum required number of MSI-X vectors
 * @maxvec:  maximum desired number of MSI-X vectors
 *
 * Legacy device driver API to enable MSI-X interrupt mode on device and
 * configure its MSI-X capability structure as appropriate. The passed
 * @entries array must have each of its members "entry" field set to a
 * desired (valid) MSI-X vector number, where the range of valid MSI-X
 * vector numbers can be queried through pci_msix_vec_count(). If
 * successful, the driver must invoke pci_disable_msix() on cleanup.
 *
 * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
 * pair should, in general, be used instead.
 *
 * Return: number of MSI-X vectors allocated (which might be smaller
 * than @maxvec), where Linux IRQ numbers for such allocated vectors
 * are saved back in the @entries array elements' "vector" field. Return
 * -ENOSPC if less than @minvec interrupt vectors are available.
 * Return -EINVAL if one of the passed @entries members "entry" field
 * was invalid or a duplicate, or if plain MSI interrupt mode was
 * earlier enabled on device. Return other errnos otherwise.
 */
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
			  int minvec, int maxvec)
{
	return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL, 0);
}
EXPORT_SYMBOL(pci_enable_msix_range);

/**
 * pci_msix_can_alloc_dyn - Query whether dynamic allocation after enabling
 *			    MSI-X is supported
 *
 * @dev: PCI device to operate on
 *
 * Return: True if supported, false otherwise
 */
bool pci_msix_can_alloc_dyn(struct pci_dev *dev)
{
	if (!dev->msix_cap)
		return false;

	return pci_msi_domain_supports(dev, MSI_FLAG_PCI_MSIX_ALLOC_DYN, DENY_LEGACY);
}
EXPORT_SYMBOL_GPL(pci_msix_can_alloc_dyn);

/**
 * pci_msix_alloc_irq_at - Allocate an MSI-X interrupt after enabling MSI-X
 *			   at a given MSI-X vector index or any free vector index
 *
 * @dev:	PCI device to operate on
 * @index:	Index to allocate. If @index == MSI_ANY_INDEX this allocates
 *		the next free index in the MSI-X table
 * @affdesc:	Optional pointer to an affinity descriptor structure. NULL otherwise
 *
 * Return: A struct msi_map
 *
 *	On success msi_map::index contains the allocated index (>= 0) and
 *	msi_map::virq contains the allocated Linux interrupt number (> 0).
 *
 *	On fail msi_map::index contains the error code and msi_map::virq
 *	is set to 0.
 */
struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
				     const struct irq_affinity_desc *affdesc)
{
	struct msi_map map = { .index = -ENOTSUPP };

	if (!dev->msix_enabled)
		return map;

	if (!pci_msix_can_alloc_dyn(dev))
		return map;

	return msi_domain_alloc_irq_at(&dev->dev, MSI_DEFAULT_DOMAIN, index, affdesc, NULL);
}
EXPORT_SYMBOL_GPL(pci_msix_alloc_irq_at);

/**
 * pci_msix_free_irq - Free an interrupt on a PCI/MSIX interrupt domain
 *		       which was allocated via pci_msix_alloc_irq_at()
 *
 * @dev:	The PCI device to operate on
 * @map:	A struct msi_map describing the interrupt to free
 *		as returned from the allocation function.
 */
void pci_msix_free_irq(struct pci_dev *dev, struct msi_map map)
{
	if (WARN_ON_ONCE(map.index < 0 || map.virq <= 0))
		return;
	if (WARN_ON_ONCE(!pci_msix_can_alloc_dyn(dev)))
		return;
	msi_domain_free_irqs_range(&dev->dev, MSI_DEFAULT_DOMAIN, map.index, map.index);
}
EXPORT_SYMBOL_GPL(pci_msix_free_irq);

/**
 * pci_disable_msix() - Disable MSI-X interrupt mode on device
 * @dev: the PCI device to operate on
 *
 * Legacy device driver API to disable MSI-X interrupt mode on device,
 * free earlier-allocated interrupt vectors, and restore INTx.
 * The PCI device Linux IRQ (@dev->irq) is restored to its default pin
 * assertion IRQ. This is the cleanup pair of pci_enable_msix_range().
 *
 * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
 * pair should, in general, be used instead.
 */
void pci_disable_msix(struct pci_dev *dev)
{
	if (!pci_msi_enabled() || !dev || !dev->msix_enabled)
		return;

	msi_lock_descs(&dev->dev);
	pci_msix_shutdown(dev);
	pci_free_msi_irqs(dev);
	msi_unlock_descs(&dev->dev);
}
EXPORT_SYMBOL(pci_disable_msix);

/**
 * pci_alloc_irq_vectors() - Allocate multiple device interrupt vectors
 * @dev:      the PCI device to operate on
 * @min_vecs: minimum required number of vectors (must be >= 1)
 * @max_vecs: maximum desired number of vectors
 * @flags:    One or more of:
 *
 *            * %PCI_IRQ_MSIX      Allow trying MSI-X vector allocations
 *            * %PCI_IRQ_MSI       Allow trying MSI vector allocations
 *
 *            * %PCI_IRQ_LEGACY    Allow trying legacy INTx interrupts, if
 *              and only if @min_vecs == 1
 *
 *            * %PCI_IRQ_AFFINITY  Auto-manage IRQs affinity by spreading
 *              the vectors around available CPUs
 *
 * Allocate up to @max_vecs interrupt vectors on device. MSI-X irq
 * vector allocation has a higher precedence over plain MSI, which has a
 * higher precedence over legacy INTx emulation.
 *
 * Upon a successful allocation, the caller should use pci_irq_vector()
 * to get the Linux IRQ number to be passed to request_threaded_irq().
 * The driver must call pci_free_irq_vectors() on cleanup.
 *
 * Return: number of allocated vectors (which might be smaller than
 * @max_vecs), -ENOSPC if less than @min_vecs interrupt vectors are
 * available, other errnos otherwise.
 */
int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
			  unsigned int max_vecs, unsigned int flags)
{
	return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
					      flags, NULL);
}
EXPORT_SYMBOL(pci_alloc_irq_vectors);

/**
 * pci_alloc_irq_vectors_affinity() - Allocate multiple device interrupt
 *				      vectors with affinity requirements
 * @dev:      the PCI device to operate on
 * @min_vecs: minimum required number of vectors (must be >= 1)
 * @max_vecs: maximum desired number of vectors
 * @flags:    allocation flags, as in pci_alloc_irq_vectors()
 * @affd:     affinity requirements (can be %NULL).
 *
 * Same as pci_alloc_irq_vectors(), but with the extra @affd parameter.
 * Check that function docs, and &struct irq_affinity, for more details.
 */
int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
				   unsigned int max_vecs, unsigned int flags,
				   struct irq_affinity *affd)
{
	struct irq_affinity msi_default_affd = {0};
	int nvecs = -ENOSPC;

	if (flags & PCI_IRQ_AFFINITY) {
		if (!affd)
			affd = &msi_default_affd;
	} else {
		if (WARN_ON(affd))
			affd = NULL;
	}

	if (flags & PCI_IRQ_MSIX) {
		nvecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
						affd, flags);
		if (nvecs > 0)
			return nvecs;
	}

	if (flags & PCI_IRQ_MSI) {
		nvecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
		if (nvecs > 0)
			return nvecs;
	}

	/* use legacy IRQ if allowed */
	if (flags & PCI_IRQ_LEGACY) {
		if (min_vecs == 1 && dev->irq) {
			/*
			 * Invoke the affinity spreading logic to ensure that
			 * the device driver can adjust queue configuration
			 * for the single interrupt case.
			 */
			if (affd)
				irq_create_affinity_masks(1, affd);
			pci_intx(dev, 1);
			return 1;
		}
	}

	return nvecs;
}
EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);
/**
 * pci_irq_vector() - Get Linux IRQ number of a device interrupt vector
 * @dev: the PCI device to operate on
 * @nr:  device-relative interrupt vector index (0-based); has different
 *       meanings, depending on interrupt mode:
 *
 *         * MSI-X     the index in the MSI-X vector table
 *         * MSI       the index of the enabled MSI vectors
 *         * INTx      must be 0
 *
 * Return: the Linux IRQ number, or -EINVAL if @nr is out of range
 */
int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
{
	unsigned int irq;

	if (!dev->msi_enabled && !dev->msix_enabled)
		return !nr ? dev->irq : -EINVAL;

	irq = msi_get_virq(&dev->dev, nr);
	return irq ? irq : -EINVAL;
}
EXPORT_SYMBOL(pci_irq_vector);

/**
 * pci_irq_get_affinity() - Get a device interrupt vector affinity
 * @dev: the PCI device to operate on
 * @nr:  device-relative interrupt vector index (0-based); has different
 *       meanings, depending on interrupt mode:
 *
 *         * MSI-X     the index in the MSI-X vector table
 *         * MSI       the index of the enabled MSI vectors
 *         * INTx      must be 0
 *
 * Return: MSI/MSI-X vector affinity, NULL if @nr is out of range or if
 * the MSI(-X) vector was allocated without explicit affinity
 * requirements (e.g., by pci_enable_msi(), pci_enable_msix_range(), or
 * pci_alloc_irq_vectors() without the %PCI_IRQ_AFFINITY flag). Return a
 * generic set of CPU IDs representing all possible CPUs available
 * during system boot if the device is in legacy INTx mode.
 */
const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
{
	int idx, irq = pci_irq_vector(dev, nr);
	struct msi_desc *desc;

	if (WARN_ON_ONCE(irq <= 0))
		return NULL;

	desc = irq_get_msi_desc(irq);
	/* Non-MSI does not have the information handy */
	if (!desc)
		return cpu_possible_mask;

	/* MSI[X] interrupts can be allocated without affinity descriptor */
	if (!desc->affinity)
		return NULL;

	/*
	 * MSI has a mask array in the descriptor.
	 * MSI-X has a single mask.
	 */
	idx = dev->msi_enabled ? nr : 0;
	return &desc->affinity[idx].mask;
}
EXPORT_SYMBOL(pci_irq_get_affinity);

/**
 * pci_ims_alloc_irq - Allocate an interrupt on a PCI/IMS interrupt domain
 * @dev:	The PCI device to operate on
 * @icookie:	Pointer to an IMS implementation specific cookie for this
 *		IMS instance (PASID, queue ID, pointer...).
 *		The cookie content is copied into the MSI descriptor for the
 *		interrupt chip callbacks or domain specific setup functions.
 * @affdesc:	Optional pointer to an interrupt affinity descriptor
 *
 * There is no index for IMS allocations as IMS is an implementation
 * specific storage and does not have any direct associations between
 * index, which might be a pure software construct, and device
 * functionality. This association is established by the driver either via
 * the index - if there is a hardware table - or in case of purely software
 * managed IMS implementation the association happens via the
 * irq_write_msi_msg() callback of the implementation specific interrupt
 * chip, which utilizes the provided @icookie to store the MSI message in
 * the appropriate place.
 *
 * Return: A struct msi_map
 *
 *	On success msi_map::index contains the allocated index (>= 0) and
 *	msi_map::virq the allocated Linux interrupt number (> 0).
 *
 *	On fail msi_map::index contains the error code and msi_map::virq
 *	is set to 0.
 */
struct msi_map pci_ims_alloc_irq(struct pci_dev *dev, union msi_instance_cookie *icookie,
				 const struct irq_affinity_desc *affdesc)
{
	return msi_domain_alloc_irq_at(&dev->dev, MSI_SECONDARY_DOMAIN, MSI_ANY_INDEX,
				       affdesc, icookie);
}
EXPORT_SYMBOL_GPL(pci_ims_alloc_irq);

/**
 * pci_ims_free_irq - Free an interrupt on a PCI/IMS interrupt domain
 *		      which was allocated via pci_ims_alloc_irq()
 * @dev:	The PCI device to operate on
 * @map:	A struct msi_map describing the interrupt to free as
 *		returned from pci_ims_alloc_irq()
 */
void pci_ims_free_irq(struct pci_dev *dev, struct msi_map map)
{
	if (WARN_ON_ONCE(map.index < 0 || map.virq <= 0))
		return;
	msi_domain_free_irqs_range(&dev->dev, MSI_SECONDARY_DOMAIN, map.index, map.index);
}
EXPORT_SYMBOL_GPL(pci_ims_free_irq);
/**
 * pci_free_irq_vectors() - Free previously allocated IRQs for a device
 * @dev: the PCI device to operate on
 *
 * Undo the interrupt vector allocations and possible device MSI/MSI-X
 * enablement earlier done through pci_alloc_irq_vectors_affinity() or
 * pci_alloc_irq_vectors().
 */
void pci_free_irq_vectors(struct pci_dev *dev)
{
	pci_disable_msix(dev);
	pci_disable_msi(dev);
}
EXPORT_SYMBOL(pci_free_irq_vectors);

/**
 * pci_restore_msi_state() - Restore cached MSI(-X) state on device
 * @dev: the PCI device to operate on
 *
 * Write the Linux-cached MSI(-X) state back on device. This is
 * typically useful upon system resume, or after an error-recovery PCI
 * adapter reset.
 */
void pci_restore_msi_state(struct pci_dev *dev)
{
	__pci_restore_msi_state(dev);
	__pci_restore_msix_state(dev);
}
EXPORT_SYMBOL_GPL(pci_restore_msi_state);

/**
 * pci_msi_enabled() - Are MSI(-X) interrupts enabled system-wide?
 *
 * Return: true if MSI has not been globally disabled through ACPI FADT,
 * PCI bridge quirks, or the "pci=nomsi" kernel command-line option.
 */
int pci_msi_enabled(void)
{
	return pci_msi_enable;
}
EXPORT_SYMBOL(pci_msi_enabled);
@@ -14,7 +14,7 @@ int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)

 	domain = dev_get_msi_domain(&dev->dev);
 	if (domain && irq_domain_is_hierarchy(domain))
-		return msi_domain_alloc_irqs_descs_locked(domain, &dev->dev, nvec);
+		return msi_domain_alloc_irqs_all_locked(&dev->dev, MSI_DEFAULT_DOMAIN, nvec);

 	return pci_msi_legacy_setup_msi_irqs(dev, nvec, type);
 }
@@ -24,12 +24,13 @@ void pci_msi_teardown_msi_irqs(struct pci_dev *dev)
 	struct irq_domain *domain;

 	domain = dev_get_msi_domain(&dev->dev);
-	if (domain && irq_domain_is_hierarchy(domain))
-		msi_domain_free_irqs_descs_locked(domain, &dev->dev);
-	else
+	if (domain && irq_domain_is_hierarchy(domain)) {
+		msi_domain_free_irqs_all_locked(&dev->dev, MSI_DEFAULT_DOMAIN);
+	} else {
 		pci_msi_legacy_teardown_msi_irqs(dev);
+		msi_free_msi_descs(&dev->dev);
+	}
 }
 /**
  * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space

@@ -63,51 +64,6 @@ static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc)
 		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
 }

-static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc)
-{
-	return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1;
-}
-
-/**
- * pci_msi_domain_check_cap - Verify that @domain supports the capabilities
- *			      for @dev
- * @domain:	The interrupt domain to check
- * @info:	The domain info for verification
- * @dev:	The device to check
- *
- * Returns:
- *  0 if the functionality is supported
- *  1 if Multi MSI is requested, but the domain does not support it
- *  -ENOTSUPP otherwise
- */
-static int pci_msi_domain_check_cap(struct irq_domain *domain,
-				    struct msi_domain_info *info,
-				    struct device *dev)
-{
-	struct msi_desc *desc = msi_first_desc(dev, MSI_DESC_ALL);
-
-	/* Special handling to support __pci_enable_msi_range() */
-	if (pci_msi_desc_is_multi_msi(desc) &&
-	    !(info->flags & MSI_FLAG_MULTI_PCI_MSI))
-		return 1;
-
-	if (desc->pci.msi_attrib.is_msix) {
-		if (!(info->flags & MSI_FLAG_PCI_MSIX))
-			return -ENOTSUPP;
-
-		if (info->flags & MSI_FLAG_MSIX_CONTIGUOUS) {
-			unsigned int idx = 0;
-
-			/* Check for gaps in the entry indices */
-			msi_for_each_desc(desc, dev, MSI_DESC_ALL) {
-				if (desc->msi_index != idx++)
-					return -ENOTSUPP;
-			}
-		}
-	}
-	return 0;
-}
 static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,
 				    struct msi_desc *desc)
 {

@@ -117,7 +73,6 @@ static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,

 static struct msi_domain_ops pci_msi_domain_ops_default = {
 	.set_desc	= pci_msi_domain_set_desc,
-	.msi_check	= pci_msi_domain_check_cap,
 };

 static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info)

@@ -129,8 +84,6 @@ static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info)
 	} else {
 		if (ops->set_desc == NULL)
 			ops->set_desc = pci_msi_domain_set_desc;
-		if (ops->msi_check == NULL)
-			ops->msi_check = pci_msi_domain_check_cap;
 	}
 }
@@ -162,8 +115,6 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 					     struct msi_domain_info *info,
 					     struct irq_domain *parent)
 {
-	struct irq_domain *domain;
-
 	if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE))
 		info->flags &= ~MSI_FLAG_LEVEL_CAPABLE;

@@ -172,22 +123,297 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 	if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS)
 		pci_msi_domain_update_chip_ops(info);

 	/* Let the core code free MSI descriptors when freeing interrupts */
 	info->flags |= MSI_FLAG_FREE_MSI_DESCS;

 	info->flags |= MSI_FLAG_ACTIVATE_EARLY | MSI_FLAG_DEV_SYSFS;
 	if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE))
 		info->flags |= MSI_FLAG_MUST_REACTIVATE;

 	/* PCI-MSI is oneshot-safe */
 	info->chip->flags |= IRQCHIP_ONESHOT_SAFE;
+	/* Let the core update the bus token */
+	info->bus_token = DOMAIN_BUS_PCI_MSI;

-	domain = msi_create_irq_domain(fwnode, info, parent);
-	if (!domain)
-		return NULL;
-
-	irq_domain_update_bus_token(domain, DOMAIN_BUS_PCI_MSI);
-	return domain;
+	return msi_create_irq_domain(fwnode, info, parent);
 }
 EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain);

+/*
+ * Per device MSI[-X] domain functionality
+ */
+static void pci_device_domain_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc)
+{
+	arg->desc = desc;
+	arg->hwirq = desc->msi_index;
+}
+
+static void pci_irq_mask_msi(struct irq_data *data)
+{
+	struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+	pci_msi_mask(desc, BIT(data->irq - desc->irq));
+}
+
+static void pci_irq_unmask_msi(struct irq_data *data)
+{
+	struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+	pci_msi_unmask(desc, BIT(data->irq - desc->irq));
+}
+
+#ifdef CONFIG_GENERIC_IRQ_RESERVATION_MODE
+# define MSI_REACTIVATE		MSI_FLAG_MUST_REACTIVATE
+#else
+# define MSI_REACTIVATE		0
+#endif
+
+#define MSI_COMMON_FLAGS	(MSI_FLAG_FREE_MSI_DESCS |	\
+				 MSI_FLAG_ACTIVATE_EARLY |	\
+				 MSI_FLAG_DEV_SYSFS |		\
+				 MSI_REACTIVATE)
+
+static const struct msi_domain_template pci_msi_template = {
+	.chip = {
+		.name			= "PCI-MSI",
+		.irq_mask		= pci_irq_mask_msi,
+		.irq_unmask		= pci_irq_unmask_msi,
+		.irq_write_msi_msg	= pci_msi_domain_write_msg,
+		.flags			= IRQCHIP_ONESHOT_SAFE,
+	},
+
+	.ops = {
+		.set_desc		= pci_device_domain_set_desc,
+	},
+
+	.info = {
+		.flags			= MSI_COMMON_FLAGS | MSI_FLAG_MULTI_PCI_MSI,
+		.bus_token		= DOMAIN_BUS_PCI_DEVICE_MSI,
+	},
+};
+
+static void pci_irq_mask_msix(struct irq_data *data)
+{
+	pci_msix_mask(irq_data_get_msi_desc(data));
+}
+
+static void pci_irq_unmask_msix(struct irq_data *data)
+{
+	pci_msix_unmask(irq_data_get_msi_desc(data));
+}
+
+static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+				  struct msi_desc *desc)
+{
+	/* Don't fiddle with preallocated MSI descriptors */
+	if (!desc->pci.mask_base)
+		msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
+}
+
+static const struct msi_domain_template pci_msix_template = {
+	.chip = {
+		.name			= "PCI-MSIX",
+		.irq_mask		= pci_irq_mask_msix,
+		.irq_unmask		= pci_irq_unmask_msix,
+		.irq_write_msi_msg	= pci_msi_domain_write_msg,
+		.flags			= IRQCHIP_ONESHOT_SAFE,
+	},
+
+	.ops = {
+		.prepare_desc		= pci_msix_prepare_desc,
+		.set_desc		= pci_device_domain_set_desc,
+	},
+
+	.info = {
+		.flags			= MSI_COMMON_FLAGS | MSI_FLAG_PCI_MSIX |
+					  MSI_FLAG_PCI_MSIX_ALLOC_DYN,
+		.bus_token		= DOMAIN_BUS_PCI_DEVICE_MSIX,
+	},
+};
+
+static bool pci_match_device_domain(struct pci_dev *pdev, enum irq_domain_bus_token bus_token)
+{
+	return msi_match_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN, bus_token);
+}
+
+static bool pci_create_device_domain(struct pci_dev *pdev, const struct msi_domain_template *tmpl,
+				     unsigned int hwsize)
+{
|
||||
struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);
|
||||
|
||||
if (!domain || !irq_domain_is_msi_parent(domain))
|
||||
return true;
|
||||
|
||||
return msi_create_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN, tmpl,
|
||||
hwsize, NULL, NULL);
|
||||
}
|
||||
|
||||
/**
|
||||
* pci_setup_msi_device_domain - Setup a device MSI interrupt domain
|
||||
* @pdev: The PCI device to create the domain on
|
||||
*
|
||||
* Return:
|
||||
* True when:
|
||||
* - The device does not have a MSI parent irq domain associated,
|
||||
* which keeps the legacy architecture specific and the global
|
||||
* PCI/MSI domain models working
|
||||
* - The MSI domain exists already
|
||||
* - The MSI domain was successfully allocated
|
||||
* False when:
|
||||
* - MSI-X is enabled
|
||||
* - The domain creation fails.
|
||||
*
|
||||
* The created MSI domain is preserved until:
|
||||
* - The device is removed
|
||||
* - MSI is disabled and a MSI-X domain is created
|
||||
*/
|
||||
bool pci_setup_msi_device_domain(struct pci_dev *pdev)
|
||||
{
|
||||
if (WARN_ON_ONCE(pdev->msix_enabled))
|
||||
return false;
|
||||
|
||||
if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSI))
|
||||
return true;
|
||||
if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSIX))
|
||||
msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);
|
||||
|
||||
return pci_create_device_domain(pdev, &pci_msi_template, 1);
|
||||
}
|
||||
|
||||
/**
|
||||
* pci_setup_msix_device_domain - Setup a device MSI-X interrupt domain
|
||||
* @pdev: The PCI device to create the domain on
|
||||
* @hwsize: The size of the MSI-X vector table
|
||||
*
|
||||
* Return:
|
||||
* True when:
|
||||
* - The device does not have a MSI parent irq domain associated,
|
||||
* which keeps the legacy architecture specific and the global
|
||||
* PCI/MSI domain models working
|
||||
* - The MSI-X domain exists already
|
||||
* - The MSI-X domain was successfully allocated
|
||||
* False when:
|
||||
* - MSI is enabled
|
||||
* - The domain creation fails.
|
||||
*
|
||||
* The created MSI-X domain is preserved until:
|
||||
* - The device is removed
|
||||
* - MSI-X is disabled and a MSI domain is created
|
||||
*/
|
||||
bool pci_setup_msix_device_domain(struct pci_dev *pdev, unsigned int hwsize)
|
||||
{
|
||||
if (WARN_ON_ONCE(pdev->msi_enabled))
|
||||
return false;
|
||||
|
||||
if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSIX))
|
||||
return true;
|
||||
if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSI))
|
||||
msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);
|
||||
|
||||
return pci_create_device_domain(pdev, &pci_msix_template, hwsize);
|
||||
}
|
||||
|
||||
/**
|
||||
* pci_msi_domain_supports - Check for support of a particular feature flag
|
||||
* @pdev: The PCI device to operate on
|
||||
* @feature_mask: The feature mask to check for (full match)
|
||||
* @mode: If ALLOW_LEGACY this grants the feature when there is no irq domain
|
||||
* associated to the device. If DENY_LEGACY the lack of an irq domain
|
||||
* makes the feature unsupported
|
||||
*/
|
||||
bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
|
||||
enum support_mode mode)
|
||||
{
|
||||
struct msi_domain_info *info;
|
||||
struct irq_domain *domain;
|
||||
unsigned int supported;
|
||||
|
||||
domain = dev_get_msi_domain(&pdev->dev);
|
||||
|
||||
if (!domain || !irq_domain_is_hierarchy(domain))
|
||||
return mode == ALLOW_LEGACY;
|
||||
|
||||
if (!irq_domain_is_msi_parent(domain)) {
|
||||
/*
|
||||
* For "global" PCI/MSI interrupt domains the associated
|
||||
* msi_domain_info::flags is the authoritive source of
|
||||
* information.
|
||||
*/
|
||||
info = domain->host_data;
|
||||
supported = info->flags;
|
||||
} else {
|
||||
/*
|
||||
* For MSI parent domains the supported feature set
|
||||
* is avaliable in the parent ops. This makes checks
|
||||
* possible before actually instantiating the
|
||||
* per device domain because the parent is never
|
||||
* expanding the PCI/MSI functionality.
|
||||
*/
|
||||
supported = domain->msi_parent_ops->supported_flags;
|
||||
}
|
||||
|
||||
return (supported & feature_mask) == feature_mask;
|
||||
}

/**
 * pci_create_ims_domain - Create a secondary IMS domain for a PCI device
 * @pdev:	The PCI device to operate on
 * @template:	The MSI info template which describes the domain
 * @hwsize:	The size of the hardware entry table or 0 if the domain
 *		is purely software managed
 * @data:	Optional pointer to domain specific data to be stored
 *		in msi_domain_info::data
 *
 * Return: True on success, false otherwise
 *
 * An IMS domain is expected to have the following constraints:
 *	- The index space is managed by the core code
 *
 *	- There is no requirement for consecutive index ranges
 *
 *	- The interrupt chip must provide the following callbacks:
 *		- irq_mask()
 *		- irq_unmask()
 *		- irq_write_msi_msg()
 *
 *	- The interrupt chip must provide the following optional callbacks
 *	  when the irq_mask(), irq_unmask() and irq_write_msi_msg() callbacks
 *	  cannot operate directly on hardware, e.g. in the case that the
 *	  interrupt message store is in queue memory:
 *		- irq_bus_lock()
 *		- irq_bus_unlock()
 *
 *	  These callbacks are invoked from preemptible task context and are
 *	  allowed to sleep. In this case the mandatory callbacks above just
 *	  store the information. The irq_bus_unlock() callback is supposed
 *	  to make the change effective before returning.
 *
 *	- Interrupt affinity setting is handled by the underlying parent
 *	  interrupt domain and communicated to the IMS domain via
 *	  irq_write_msi_msg().
 *
 * The domain is automatically destroyed when the PCI device is removed.
 */
bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
			   unsigned int hwsize, void *data)
{
	struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);

	if (!domain || !irq_domain_is_msi_parent(domain))
		return false;

	if (template->info.bus_token != DOMAIN_BUS_PCI_DEVICE_IMS ||
	    !(template->info.flags & MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS) ||
	    !(template->info.flags & MSI_FLAG_FREE_MSI_DESCS) ||
	    !template->chip.irq_mask || !template->chip.irq_unmask ||
	    !template->chip.irq_write_msi_msg || template->chip.irq_set_affinity)
		return false;

	return msi_create_device_irq_domain(&pdev->dev, MSI_SECONDARY_DOMAIN, template,
					    hwsize, data, NULL);
}
EXPORT_SYMBOL_GPL(pci_create_ims_domain);

/*
 * Users of the generic MSI infrastructure expect a device to have a single ID,
 * so with DMA aliases we have to pick the least-worst compromise. Devices with
@@ -257,24 +483,3 @@ struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
					     DOMAIN_BUS_PCI_MSI);
	return dom;
}

/**
 * pci_dev_has_special_msi_domain - Check whether the device is handled by
 *				    a non-standard PCI-MSI domain
 * @pdev:	The PCI device to check.
 *
 * Returns: True if the device irqdomain or the bus irqdomain is
 * non-standard PCI/MSI.
 */
bool pci_dev_has_special_msi_domain(struct pci_dev *pdev)
{
	struct irq_domain *dom = dev_get_msi_domain(&pdev->dev);

	if (!dom)
		dom = dev_get_msi_domain(&pdev->bus->dev);

	if (!dom)
		return true;

	return dom->bus_token != DOMAIN_BUS_PCI_MSI;
}
File diff suppressed because it is too large
@@ -5,24 +5,70 @@

#define msix_table_size(flags)	((flags & PCI_MSIX_FLAGS_QSIZE) + 1)

extern int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
extern void pci_msi_teardown_msi_irqs(struct pci_dev *dev);
int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
void pci_msi_teardown_msi_irqs(struct pci_dev *dev);

#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS
extern int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
extern void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev);
#else
static inline int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
/* Mask/unmask helpers */
void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set);

static inline void pci_msi_mask(struct msi_desc *desc, u32 mask)
{
	WARN_ON_ONCE(1);
	return -ENODEV;
	pci_msi_update_mask(desc, 0, mask);
}

static inline void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev)
static inline void pci_msi_unmask(struct msi_desc *desc, u32 mask)
{
	WARN_ON_ONCE(1);
	pci_msi_update_mask(desc, mask, 0);
}

static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
{
	return desc->pci.mask_base + desc->msi_index * PCI_MSIX_ENTRY_SIZE;
}

/*
 * This internal function does not flush PCI writes to the device. All
 * users must ensure that they read from the device before either assuming
 * that the device state is up to date, or returning out of this file.
 * It does not affect the msi_desc::msix_ctrl cache either. Use with care!
 */
static inline void pci_msix_write_vector_ctrl(struct msi_desc *desc, u32 ctrl)
{
	void __iomem *desc_addr = pci_msix_desc_addr(desc);

	if (desc->pci.msi_attrib.can_mask)
		writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
}

static inline void pci_msix_mask(struct msi_desc *desc)
{
	desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
	pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl);
	/* Flush write to device */
	readl(desc->pci.mask_base);
}

static inline void pci_msix_unmask(struct msi_desc *desc)
{
	desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
	pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl);
}

static inline void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask)
{
	if (desc->pci.msi_attrib.is_msix)
		pci_msix_mask(desc);
	else
		pci_msi_mask(desc, mask);
}

static inline void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask)
{
	if (desc->pci.msi_attrib.is_msix)
		pci_msix_unmask(desc);
	else
		pci_msi_unmask(desc, mask);
}
#endif

/*
 * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to
@@ -37,3 +83,47 @@ static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc)
		return 0xffffffff;
	return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1;
}

void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc);

/* Subsystem variables */
extern int pci_msi_enable;

/* MSI internal functions invoked from the public APIs */
void pci_msi_shutdown(struct pci_dev *dev);
void pci_msix_shutdown(struct pci_dev *dev);
void pci_free_msi_irqs(struct pci_dev *dev);
int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, struct irq_affinity *affd);
int __pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries, int minvec,
			    int maxvec, struct irq_affinity *affd, int flags);
void __pci_restore_msi_state(struct pci_dev *dev);
void __pci_restore_msix_state(struct pci_dev *dev);

/* irq_domain related functionality */

enum support_mode {
	ALLOW_LEGACY,
	DENY_LEGACY,
};

bool pci_msi_domain_supports(struct pci_dev *dev, unsigned int feature_mask, enum support_mode mode);
bool pci_setup_msi_device_domain(struct pci_dev *pdev);
bool pci_setup_msix_device_domain(struct pci_dev *pdev, unsigned int hwsize);

/* Legacy (!IRQDOMAIN) fallbacks */

#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS
int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev);
#else
static inline int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
	WARN_ON_ONCE(1);
	return -ENODEV;
}

static inline void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev)
{
	WARN_ON_ONCE(1);
}
#endif

@@ -842,7 +842,6 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
	if (!d)
		d = pci_host_bridge_acpi_msi_domain(bus);

#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
	/*
	 * If no IRQ domain was found via the OF tree, try looking it up
	 * directly through the fwnode_handle.
@@ -854,7 +853,6 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
			d = irq_find_matching_fwnode(fwnode,
						     DOMAIN_BUS_PCI_MSI);
	}
#endif

	return d;
}
@@ -93,7 +93,7 @@ config ARM_PMU_ACPI

config ARM_SMMU_V3_PMU
	tristate "ARM SMMUv3 Performance Monitors Extension"
	depends on (ARM64 && ACPI) || (COMPILE_TEST && 64BIT)
	depends on GENERIC_MSI_IRQ_DOMAIN
	depends on GENERIC_MSI_IRQ
	help
	  Provides support for the ARM SMMUv3 Performance Monitor Counter
	  Groups (PMCG), which provide monitoring of transactions passing
@@ -10,7 +10,6 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/msi.h>
#include <linux/dma-mapping.h>
#include <linux/delay.h>
#include <linux/io.h>
@@ -98,6 +98,6 @@ endif # SOC_TI

config TI_SCI_INTA_MSI_DOMAIN
	bool
	select GENERIC_MSI_IRQ_DOMAIN
	select GENERIC_MSI_IRQ
	help
	  Driver to enable Interrupt Aggregator specific MSI Domain.
@@ -73,13 +73,13 @@ static int ti_sci_inta_msi_alloc_descs(struct device *dev,
	for (set = 0; set < res->sets; set++) {
		for (i = 0; i < res->desc[set].num; i++, count++) {
			msi_desc.msi_index = res->desc[set].start + i;
			if (msi_add_msi_desc(dev, &msi_desc))
			if (msi_insert_msi_desc(dev, &msi_desc))
				goto fail;
		}

		for (i = 0; i < res->desc[set].num_sec; i++, count++) {
			msi_desc.msi_index = res->desc[set].start_sec + i;
			if (msi_add_msi_desc(dev, &msi_desc))
			if (msi_insert_msi_desc(dev, &msi_desc))
				goto fail;
		}
	}
@@ -93,13 +93,8 @@ int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev,
				      struct ti_sci_resource *res)
{
	struct platform_device *pdev = to_platform_device(dev);
	struct irq_domain *msi_domain;
	int ret, nvec;

	msi_domain = dev_get_msi_domain(dev);
	if (!msi_domain)
		return -EINVAL;

	if (pdev->id < 0)
		return -ENODEV;

@@ -114,7 +109,8 @@ int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev,
		goto unlock;
	}

	ret = msi_domain_alloc_irqs_descs_locked(msi_domain, dev, nvec);
	/* Use alloc ALL as it's unclear whether there are gaps in the indices */
	ret = msi_domain_alloc_irqs_all_locked(dev, MSI_DEFAULT_DOMAIN, nvec);
	if (ret)
		dev_err(dev, "Failed to allocate IRQs %d\n", ret);
unlock:
@@ -8,7 +8,6 @@
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/eventfd.h>
#include <linux/msi.h>

#include "linux/fsl/mc.h"
#include "vfio_fsl_mc_private.h"
@@ -4,7 +4,7 @@

#include <linux/types.h>

#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ

#ifndef NUM_MSI_ALLOC_SCRATCHPAD_REGS
# define NUM_MSI_ALLOC_SCRATCHPAD_REGS	2
@@ -36,6 +36,6 @@ typedef struct msi_alloc_info {

#define GENERIC_MSI_DOMAIN_OPS		1

#endif /* CONFIG_GENERIC_MSI_IRQ_DOMAIN */
#endif /* CONFIG_GENERIC_MSI_IRQ */

#endif
@@ -15,13 +15,15 @@

#include <linux/clocksource.h>
#include <linux/math64.h>
#include <asm/mshyperv.h>
#include <asm/hyperv-tlfs.h>

#define HV_MAX_MAX_DELTA_TICKS 0xffffffff
#define HV_MIN_DELTA_TICKS 1

#ifdef CONFIG_HYPERV_TIMER

#include <asm/hyperv_timer.h>

/* Routines called by the VMbus driver */
extern int hv_stimer_alloc(bool have_percpu_irqs);
extern int hv_stimer_cleanup(unsigned int cpu);
@@ -378,10 +378,8 @@ struct dev_links_info {
 * @data:	Pointer to MSI device data
 */
struct dev_msi_info {
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
	struct irq_domain	*domain;
#endif
#ifdef CONFIG_GENERIC_MSI_IRQ
	struct irq_domain	*domain;
	struct msi_device_data	*data;
#endif
};
@@ -742,7 +740,7 @@ static inline void set_dev_node(struct device *dev, int node)

static inline struct irq_domain *dev_get_msi_domain(const struct device *dev)
{
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
	return dev->msi.domain;
#else
	return NULL;
@@ -751,7 +749,7 @@ static inline struct irq_domain *dev_get_msi_domain(const struct device *dev)

static inline void dev_set_msi_domain(struct device *dev, struct irq_domain *d)
{
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
	dev->msi.domain = d;
#endif
}
@@ -27,7 +27,7 @@ struct gpio_chip;

union gpio_irq_fwspec {
	struct irq_fwspec	fwspec;
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#ifdef CONFIG_GENERIC_MSI_IRQ
	msi_alloc_info_t	msiinfo;
#endif
};
@@ -31,6 +31,7 @@
#define _LINUX_IRQDOMAIN_H

#include <linux/types.h>
#include <linux/irqdomain_defs.h>
#include <linux/irqhandler.h>
#include <linux/of.h>
#include <linux/mutex.h>
@@ -45,6 +46,7 @@ struct irq_desc;
struct cpumask;
struct seq_file;
struct irq_affinity_desc;
struct msi_parent_ops;

#define IRQ_DOMAIN_IRQ_SPEC_PARAMS 16

@@ -68,27 +70,6 @@ struct irq_fwspec {
void of_phandle_args_to_fwspec(struct device_node *np, const u32 *args,
			       unsigned int count, struct irq_fwspec *fwspec);

/*
 * Should several domains have the same device node, but serve
 * different purposes (for example one domain is for PCI/MSI, and the
 * other for wired IRQs), they can be distinguished using a
 * bus-specific token. Most domains are expected to only carry
 * DOMAIN_BUS_ANY.
 */
enum irq_domain_bus_token {
	DOMAIN_BUS_ANY		= 0,
	DOMAIN_BUS_WIRED,
	DOMAIN_BUS_GENERIC_MSI,
	DOMAIN_BUS_PCI_MSI,
	DOMAIN_BUS_PLATFORM_MSI,
	DOMAIN_BUS_NEXUS,
	DOMAIN_BUS_IPI,
	DOMAIN_BUS_FSL_MC_MSI,
	DOMAIN_BUS_TI_SCI_INTA_MSI,
	DOMAIN_BUS_WAKEUP,
	DOMAIN_BUS_VMD_MSI,
};

/**
 * struct irq_domain_ops - Methods for irq_domain objects
 * @match: Match an interrupt controller device node to a host, returns
@@ -139,23 +120,27 @@ struct irq_domain_chip_generic;
 * struct irq_domain - Hardware interrupt number translation object
 * @link:	Element in global irq_domain list.
 * @name:	Name of interrupt domain
 * @ops:	pointer to irq_domain methods
 * @host_data:	private data pointer for use by owner. Not touched by irq_domain
 * @ops:	Pointer to irq_domain methods
 * @host_data:	Private data pointer for use by owner. Not touched by irq_domain
 *		core code.
 * @flags:	host per irq_domain flags
 * @flags:	Per irq_domain flags
 * @mapcount:	The number of mapped interrupts
 *
 * Optional elements
 * Optional elements:
 * @fwnode:	Pointer to firmware node associated with the irq_domain. Pretty easy
 *		to swap it for the of_node via the irq_domain_get_of_node accessor
 * @gc:		Pointer to a list of generic chips. There is a helper function for
 *		setting up one or more generic chips for interrupt controllers
 *		drivers using the generic chip library which uses this pointer.
 * @dev:	Pointer to a device that the domain represent, and that will be
 *		used for power management purposes.
 * @dev:	Pointer to the device which instantiated the irqdomain
 *		With per device irq domains this is not necessarily the same
 *		as @pm_dev.
 * @pm_dev:	Pointer to a device that can be utilized for power management
 *		purposes related to the irq domain.
 * @parent:	Pointer to parent irq_domain to support hierarchy irq_domains
 * @msi_parent_ops: Pointer to MSI parent domain methods for per device domain init
 *
 * Revmap data, used internally by irq_domain
 * Revmap data, used internally by the irq domain code:
 * @revmap_size:	Size of the linear map table @revmap[]
 * @revmap_tree:	Radix map tree for hwirqs that don't fit in the linear map
 * @revmap_mutex:	Lock for the revmap
@@ -174,9 +159,13 @@ struct irq_domain {
	enum irq_domain_bus_token bus_token;
	struct irq_domain_chip_generic *gc;
	struct device *dev;
	struct device *pm_dev;
#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
	struct irq_domain *parent;
#endif
#ifdef CONFIG_GENERIC_MSI_IRQ
	const struct msi_parent_ops *msi_parent_ops;
#endif

	/* reverse map data. The linear map gets appended to the irq_domain */
	irq_hw_number_t hwirq_max;
@@ -206,15 +195,14 @@ enum {
	/* Irq domain implements MSI remapping */
	IRQ_DOMAIN_FLAG_MSI_REMAP	= (1 << 5),

	/*
	 * Quirk to handle MSI implementations which do not provide
	 * masking. Currently known to affect x86, but partially
	 * handled in core code.
	 */
	IRQ_DOMAIN_MSI_NOMASK_QUIRK	= (1 << 6),

	/* Irq domain doesn't translate anything */
	IRQ_DOMAIN_FLAG_NO_MAP		= (1 << 7),
	IRQ_DOMAIN_FLAG_NO_MAP		= (1 << 6),

	/* Irq domain is a MSI parent domain */
	IRQ_DOMAIN_FLAG_MSI_PARENT	= (1 << 8),

	/* Irq domain is a MSI device domain */
	IRQ_DOMAIN_FLAG_MSI_DEVICE	= (1 << 9),

	/*
	 * Flags starting from IRQ_DOMAIN_FLAG_NONCORE are reserved
@@ -233,7 +221,7 @@ static inline void irq_domain_set_pm_device(struct irq_domain *d,
					    struct device *dev)
{
	if (d)
		d->dev = dev;
		d->pm_dev = dev;
}

#ifdef CONFIG_IRQ_DOMAIN
@@ -578,6 +566,16 @@ static inline bool irq_domain_is_msi_remap(struct irq_domain *domain)

extern bool irq_domain_hierarchical_is_msi_remap(struct irq_domain *domain);

static inline bool irq_domain_is_msi_parent(struct irq_domain *domain)
{
	return domain->flags & IRQ_DOMAIN_FLAG_MSI_PARENT;
}

static inline bool irq_domain_is_msi_device(struct irq_domain *domain)
{
	return domain->flags & IRQ_DOMAIN_FLAG_MSI_DEVICE;
}

#else /* CONFIG_IRQ_DOMAIN_HIERARCHY */
static inline int irq_domain_alloc_irqs(struct irq_domain *domain,
					unsigned int nr_irqs, int node, void *arg)
@@ -623,6 +621,17 @@ irq_domain_hierarchical_is_msi_remap(struct irq_domain *domain)
{
	return false;
}

static inline bool irq_domain_is_msi_parent(struct irq_domain *domain)
{
	return false;
}

static inline bool irq_domain_is_msi_device(struct irq_domain *domain)
{
	return false;
}

#endif /* CONFIG_IRQ_DOMAIN_HIERARCHY */

#else /* CONFIG_IRQ_DOMAIN */
include/linux/irqdomain_defs.h | 31 (new file)
@ -0,0 +1,31 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _LINUX_IRQDOMAIN_DEFS_H
|
||||
#define _LINUX_IRQDOMAIN_DEFS_H
|
||||
|
||||
/*
|
||||
* Should several domains have the same device node, but serve
|
||||
* different purposes (for example one domain is for PCI/MSI, and the
|
||||
* other for wired IRQs), they can be distinguished using a
|
||||
* bus-specific token. Most domains are expected to only carry
|
||||
* DOMAIN_BUS_ANY.
|
||||
*/
|
||||
enum irq_domain_bus_token {
|
||||
DOMAIN_BUS_ANY = 0,
|
||||
DOMAIN_BUS_WIRED,
|
||||
DOMAIN_BUS_GENERIC_MSI,
|
||||
DOMAIN_BUS_PCI_MSI,
|
||||
DOMAIN_BUS_PLATFORM_MSI,
|
||||
DOMAIN_BUS_NEXUS,
|
||||
DOMAIN_BUS_IPI,
|
||||
DOMAIN_BUS_FSL_MC_MSI,
|
||||
DOMAIN_BUS_TI_SCI_INTA_MSI,
|
||||
DOMAIN_BUS_WAKEUP,
|
||||
DOMAIN_BUS_VMD_MSI,
|
||||
DOMAIN_BUS_PCI_DEVICE_MSI,
|
||||
DOMAIN_BUS_PCI_DEVICE_MSIX,
|
||||
DOMAIN_BUS_DMAR,
|
||||
DOMAIN_BUS_AMDVI,
|
||||
DOMAIN_BUS_PCI_DEVICE_IMS,
|
||||
};
|
||||
|
||||
#endif /* _LINUX_IRQDOMAIN_DEFS_H */
|
@@ -3,10 +3,10 @@
#define _LINUX_IRQRETURN_H

/**
 * enum irqreturn
 * @IRQ_NONE		interrupt was not from this device or was not handled
 * @IRQ_HANDLED		interrupt was handled by this device
 * @IRQ_WAKE_THREAD	handler requests to wake the handler thread
 * enum irqreturn - irqreturn type values
 * @IRQ_NONE:		interrupt was not from this device or was not handled
 * @IRQ_HANDLED:	interrupt was handled by this device
 * @IRQ_WAKE_THREAD:	handler requests to wake the handler thread
 */
enum irqreturn {
	IRQ_NONE		= (0 << 0),
@ -13,13 +13,20 @@
|
||||
*
|
||||
* Regular device drivers have no business with any of these functions and
|
||||
* especially storing MSI descriptor pointers in random code is considered
|
||||
* abuse. The only function which is relevant for drivers is msi_get_virq().
|
||||
* abuse.
|
||||
*
|
||||
* Device driver relevant functions are available in <linux/msi_api.h>
|
||||
*/
|
||||
|
||||
#include <linux/irqdomain_defs.h>
|
||||
#include <linux/cpumask.h>
|
||||
#include <linux/msi_api.h>
|
||||
#include <linux/xarray.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/irq.h>
|
||||
#include <linux/bits.h>
|
||||
|
||||
#include <asm/msi.h>
|
||||
|
||||
/* Dummy shadow structures if an architecture does not define them */
|
||||
@ -68,19 +75,18 @@ struct msi_msg {
|
||||
|
||||
extern int pci_msi_ignore_mask;
|
||||
/* Helper functions */
|
||||
struct irq_data;
|
||||
struct msi_desc;
|
||||
struct pci_dev;
|
||||
struct platform_msi_priv_data;
|
||||
struct device_attribute;
|
||||
struct irq_domain;
|
||||
struct irq_affinity_desc;
|
||||
|
||||
void __get_cached_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
|
||||
#ifdef CONFIG_GENERIC_MSI_IRQ
|
||||
void get_cached_msi_msg(unsigned int irq, struct msi_msg *msg);
|
||||
#else
|
||||
static inline void get_cached_msi_msg(unsigned int irq, struct msi_msg *msg)
|
||||
{
|
||||
}
|
||||
static inline void get_cached_msi_msg(unsigned int irq, struct msi_msg *msg) { }
|
||||
#endif
|
||||
|
||||
typedef void (*irq_write_msi_msg_t)(struct msi_desc *desc,
|
||||
@ -120,6 +126,38 @@ struct pci_msi_desc {
|
||||
};
|
||||
};
|
||||
|
||||
/**
|
||||
* union msi_domain_cookie - Opaque MSI domain specific data
|
||||
* @value: u64 value store
|
||||
* @ptr: Pointer to domain specific data
|
||||
* @iobase: Domain specific IOmem pointer
|
||||
*
|
||||
* The content of this data is implementation defined and used by the MSI
|
||||
* domain to store domain specific information which is requried for
|
||||
* interrupt chip callbacks.
|
||||
*/
|
||||
union msi_domain_cookie {
|
||||
u64 value;
|
||||
void *ptr;
|
||||
void __iomem *iobase;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct msi_desc_data - Generic MSI descriptor data
|
||||
* @dcookie: Cookie for MSI domain specific data which is required
|
||||
* for irq_chip callbacks
|
||||
* @icookie: Cookie for the MSI interrupt instance provided by
|
||||
* the usage site to the allocation function
|
||||
*
|
||||
* The content of this data is implementation defined, e.g. PCI/IMS
|
||||
* implementations define the meaning of the data. The MSI core ignores
|
||||
* this data completely.
|
||||
*/
|
||||
struct msi_desc_data {
|
||||
union msi_domain_cookie dcookie;
|
||||
union msi_instance_cookie icookie;
|
||||
};
|
||||
|
||||
#define MSI_MAX_INDEX ((unsigned int)USHRT_MAX)
|
||||
|
||||
/**
|
||||
@@ -137,6 +175,7 @@ struct pci_msi_desc {
 *
 * @msi_index:	Index of the msi descriptor
 * @pci:	PCI specific msi descriptor data
 * @data:	Generic MSI descriptor data
 */
struct msi_desc {
	/* Shared device/bus type independent data */
@@ -156,7 +195,10 @@ struct msi_desc {
	void *write_msi_msg_data;

	u16				msi_index;
	union {
		struct pci_msi_desc	pci;
		struct msi_desc_data	data;
	};
};

/*
@@ -171,33 +213,80 @@ enum msi_desc_filter {
	MSI_DESC_ASSOCIATED,
};

/**
 * struct msi_dev_domain - The internals of MSI domain info per device
 * @store:	Xarray for storing MSI descriptor pointers
 * @irqdomain:	Pointer to a per device interrupt domain
 */
struct msi_dev_domain {
	struct xarray		store;
	struct irq_domain	*domain;
};
/**
 * msi_device_data - MSI per device data
 * @properties:		MSI properties which are interesting to drivers
 * @platform_data:	Platform-MSI specific data
 * @mutex:		Mutex protecting the MSI descriptor store
- * @__store:		Xarray for storing MSI descriptor pointers
+ * @__domains:		Internal data for per device MSI domains
 * @__iter_idx:		Index to search the next entry for iterators
 */
struct msi_device_data {
	unsigned long			properties;
	struct platform_msi_priv_data	*platform_data;
	struct mutex			mutex;
-	struct xarray			__store;
+	struct msi_dev_domain		__domains[MSI_MAX_DEVICE_IRQDOMAINS];
	unsigned long			__iter_idx;
};

int msi_setup_device_data(struct device *dev);

-unsigned int msi_get_virq(struct device *dev, unsigned int index);
void msi_lock_descs(struct device *dev);
void msi_unlock_descs(struct device *dev);

-struct msi_desc *msi_first_desc(struct device *dev, enum msi_desc_filter filter);
-struct msi_desc *msi_next_desc(struct device *dev, enum msi_desc_filter filter);
+struct msi_desc *msi_domain_first_desc(struct device *dev, unsigned int domid,
+				       enum msi_desc_filter filter);
/**
- * msi_for_each_desc - Iterate the MSI descriptors
+ * msi_first_desc - Get the first MSI descriptor of the default irqdomain
 * @dev:	Device to operate on
 * @filter:	Descriptor state filter
 *
 * Must be called with the MSI descriptor mutex held, i.e. msi_lock_descs()
 * must be invoked before the call.
 *
 * Return: Pointer to the first MSI descriptor matching the search
 *	   criteria, NULL if none found.
 */
static inline struct msi_desc *msi_first_desc(struct device *dev,
					      enum msi_desc_filter filter)
{
	return msi_domain_first_desc(dev, MSI_DEFAULT_DOMAIN, filter);
}

struct msi_desc *msi_next_desc(struct device *dev, unsigned int domid,
			       enum msi_desc_filter filter);

/**
 * msi_domain_for_each_desc - Iterate the MSI descriptors in a specific domain
 *
 * @desc:	struct msi_desc pointer used as iterator
 * @dev:	struct device pointer - device to iterate
 * @domid:	The id of the interrupt domain which should be walked.
 * @filter:	Filter for descriptor selection
 *
 * Notes:
 *  - The loop must be protected with a msi_lock_descs()/msi_unlock_descs()
 *    pair.
 *  - It is safe to remove a retrieved MSI descriptor in the loop.
 */
#define msi_domain_for_each_desc(desc, dev, domid, filter)			\
	for ((desc) = msi_domain_first_desc((dev), (domid), (filter)); (desc);	\
	     (desc) = msi_next_desc((dev), (domid), (filter)))

/**
 * msi_for_each_desc - Iterate the MSI descriptors in the default irqdomain
 *
 * @desc:	struct msi_desc pointer used as iterator
 * @dev:	struct device pointer - device to iterate
@@ -209,8 +298,7 @@ struct msi_desc *msi_next_desc(struct device *dev, enum msi_desc_filter filter);
 *  - It is safe to remove a retrieved MSI descriptor in the loop.
 */
#define msi_for_each_desc(desc, dev, filter)			\
-	for ((desc) = msi_first_desc((dev), (filter)); (desc);	\
-	     (desc) = msi_next_desc((dev), (filter)))
+	msi_domain_for_each_desc((desc), (dev), MSI_DEFAULT_DOMAIN, (filter))

#define msi_desc_to_dev(desc)		((desc)->dev)
@@ -237,34 +325,47 @@ static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
}
#endif

-#ifdef CONFIG_PCI_MSI
-struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc);
-void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg);
-#else /* CONFIG_PCI_MSI */
-static inline void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg)
+int msi_domain_insert_msi_desc(struct device *dev, unsigned int domid,
+			       struct msi_desc *init_desc);
+
+/**
+ * msi_insert_msi_desc - Allocate and initialize a MSI descriptor in the
+ *			 default irqdomain and insert it at @init_desc->msi_index
+ * @dev:	Pointer to the device for which the descriptor is allocated
+ * @init_desc:	Pointer to an MSI descriptor to initialize the new descriptor
+ *
+ * Return: 0 on success or an appropriate failure code.
+ */
+static inline int msi_insert_msi_desc(struct device *dev, struct msi_desc *init_desc)
 {
+	return msi_domain_insert_msi_desc(dev, MSI_DEFAULT_DOMAIN, init_desc);
 }
-#endif /* CONFIG_PCI_MSI */
-int msi_add_msi_desc(struct device *dev, struct msi_desc *init_desc);
-void msi_free_msi_descs_range(struct device *dev, enum msi_desc_filter filter,
-			      unsigned int first_index, unsigned int last_index);
+void msi_domain_free_msi_descs_range(struct device *dev, unsigned int domid,
+				     unsigned int first, unsigned int last);

/**
- * msi_free_msi_descs - Free MSI descriptors of a device
+ * msi_free_msi_descs_range - Free a range of MSI descriptors of a device
+ *			      in the default irqdomain
 *
 * @dev:	Device for which to free the descriptors
 * @first:	Index to start freeing from (inclusive)
 * @last:	Last index to be freed (inclusive)
 */
static inline void msi_free_msi_descs_range(struct device *dev, unsigned int first,
					    unsigned int last)
{
	msi_domain_free_msi_descs_range(dev, MSI_DEFAULT_DOMAIN, first, last);
}

/**
 * msi_free_msi_descs - Free all MSI descriptors of a device in the default irqdomain
 * @dev:	Device to free the descriptors
 */
static inline void msi_free_msi_descs(struct device *dev)
{
-	msi_free_msi_descs_range(dev, MSI_DESC_ALL, 0, MSI_MAX_INDEX);
+	msi_free_msi_descs_range(dev, 0, MSI_MAX_INDEX);
}
-void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
-void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
-
-void pci_msi_mask_irq(struct irq_data *data);
-void pci_msi_unmask_irq(struct irq_data *data);

/*
 * The arch hooks to setup up msi irqs. Default functions are implemented
 * as weak symbols so that they /can/ be overridden by architecture specific
@@ -293,7 +394,7 @@ static inline void msi_device_destroy_sysfs(struct device *dev) { }
 */
bool arch_restore_msi_irqs(struct pci_dev *dev);

-#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
+#ifdef CONFIG_GENERIC_MSI_IRQ
#include <linux/irqhandler.h>

@@ -309,19 +410,22 @@ struct msi_domain_info;
 * @get_hwirq:		Retrieve the resulting hw irq number
 * @msi_init:		Domain specific init function for MSI interrupts
 * @msi_free:		Domain specific function to free a MSI interrupts
 * @msi_check:		Callback for verification of the domain/info/dev data
 * @msi_prepare:	Prepare the allocation of the interrupts in the domain
+ * @prepare_desc:	Optional function to prepare the allocated MSI descriptor
+ *			in the domain
 * @set_desc:		Set the msi descriptor for an interrupt
 * @domain_alloc_irqs:	Optional function to override the default allocation
 *			function.
 * @domain_free_irqs:	Optional function to override the default free
 *			function.
+ * @msi_post_free:	Optional function which is invoked after freeing
+ *			all interrupts.
 *
 * @get_hwirq, @msi_init and @msi_free are callbacks used by the underlying
 * irqdomain.
 *
- * @msi_check, @msi_prepare and @set_desc are callbacks used by
- * msi_domain_alloc/free_irqs().
+ * @msi_check, @msi_prepare, @prepare_desc and @set_desc are callbacks used by the
+ * msi_domain_alloc/free_irqs*() variants.
 *
 * @domain_alloc_irqs, @domain_free_irqs can be used to override the
 * default allocation/free functions (__msi_domain_alloc/free_irqs). This
@@ -329,15 +433,6 @@ struct msi_domain_info;
 * be wrapped into the regular irq domains concepts by mere mortals. This
 * allows to universally use msi_domain_alloc/free_irqs without having to
 * special case XEN all over the place.
- *
- * Contrary to other operations @domain_alloc_irqs and @domain_free_irqs
- * are set to the default implementation if NULL and even when
- * MSI_FLAG_USE_DEF_DOM_OPS is not set to avoid breaking existing users and
- * because these callbacks are obviously mandatory.
- *
- * This is NOT meant to be abused, but it can be useful to build wrappers
- * for specialized MSI irq domains which need extra work before and after
- * calling __msi_domain_alloc_irqs()/__msi_domain_free_irqs().
 */
struct msi_domain_ops {
	irq_hw_number_t	(*get_hwirq)(struct msi_domain_info *info,
@@ -349,23 +444,29 @@ struct msi_domain_ops {
	void		(*msi_free)(struct irq_domain *domain,
				    struct msi_domain_info *info,
				    unsigned int virq);
	int		(*msi_check)(struct irq_domain *domain,
				     struct msi_domain_info *info,
				     struct device *dev);
	int		(*msi_prepare)(struct irq_domain *domain,
				       struct device *dev, int nvec,
				       msi_alloc_info_t *arg);
+	void		(*prepare_desc)(struct irq_domain *domain, msi_alloc_info_t *arg,
+					struct msi_desc *desc);
	void		(*set_desc)(msi_alloc_info_t *arg,
				    struct msi_desc *desc);
	int		(*domain_alloc_irqs)(struct irq_domain *domain,
					     struct device *dev, int nvec);
	void		(*domain_free_irqs)(struct irq_domain *domain,
					    struct device *dev);
+	void		(*msi_post_free)(struct irq_domain *domain,
+					 struct device *dev);
};
/**
 * struct msi_domain_info - MSI interrupt domain data
 * @flags:		Flags to describe features and capabilities
+ * @bus_token:		The domain bus token
+ * @hwsize:		The hardware table size or the software index limit.
+ *			If 0 then the size is considered unlimited and
+ *			gets initialized to the maximum software index limit
+ *			by the domain creation code.
 * @ops:		The callback data structure
 * @chip:		Optional: associated interrupt chip
 * @chip_data:		Optional: associated interrupt chip data
@@ -376,6 +477,8 @@ struct msi_domain_ops {
 */
struct msi_domain_info {
	u32			flags;
+	enum irq_domain_bus_token bus_token;
+	unsigned int		hwsize;
	struct msi_domain_ops	*ops;
	struct irq_chip		*chip;
	void			*chip_data;
@@ -385,7 +488,30 @@ struct msi_domain_info {
	void			*data;
};
-/* Flags for msi_domain_info */
+/**
+ * struct msi_domain_template - Template for MSI device domains
+ * @name:	Storage for the resulting name. Filled in by the core.
+ * @chip:	Interrupt chip for this domain
+ * @ops:	MSI domain ops
+ * @info:	MSI domain info data
+ */
+struct msi_domain_template {
+	char			name[48];
+	struct irq_chip		chip;
+	struct msi_domain_ops	ops;
+	struct msi_domain_info	info;
+};
+
+/*
+ * Flags for msi_domain_info
+ *
+ * Bit 0-15:	Generic MSI functionality which is not subject to restriction
+ *		by parent domains
+ *
+ * Bit 16-31:	Functionality which depends on the underlying parent domain and
+ *		can be masked out by msi_parent_ops::init_dev_msi_info() when
+ *		a device MSI domain is initialized.
+ */
enum {
	/*
	 * Init non implemented ops callbacks with default MSI domain
@@ -397,44 +523,100 @@ enum {
	 * callbacks.
	 */
	MSI_FLAG_USE_DEF_CHIP_OPS	= (1 << 1),
-	/* Support multiple PCI MSI interrupts */
-	MSI_FLAG_MULTI_PCI_MSI		= (1 << 2),
-	/* Support PCI MSIX interrupts */
-	MSI_FLAG_PCI_MSIX		= (1 << 3),
	/* Needs early activate, required for PCI */
-	MSI_FLAG_ACTIVATE_EARLY		= (1 << 4),
+	MSI_FLAG_ACTIVATE_EARLY		= (1 << 2),
	/*
	 * Must reactivate when irq is started even when
	 * MSI_FLAG_ACTIVATE_EARLY has been set.
	 */
-	MSI_FLAG_MUST_REACTIVATE	= (1 << 5),
-	/* Is level-triggered capable, using two messages */
-	MSI_FLAG_LEVEL_CAPABLE		= (1 << 6),
+	MSI_FLAG_MUST_REACTIVATE	= (1 << 3),
	/* Populate sysfs on alloc() and destroy it on free() */
-	MSI_FLAG_DEV_SYSFS		= (1 << 7),
-	/* MSI-X entries must be contiguous */
-	MSI_FLAG_MSIX_CONTIGUOUS	= (1 << 8),
+	MSI_FLAG_DEV_SYSFS		= (1 << 4),
	/* Allocate simple MSI descriptors */
-	MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS	= (1 << 9),
+	MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS	= (1 << 5),
	/* Free MSI descriptors */
-	MSI_FLAG_FREE_MSI_DESCS		= (1 << 10),
+	MSI_FLAG_FREE_MSI_DESCS		= (1 << 6),
	/*
	 * Quirk to handle MSI implementations which do not provide
	 * masking. Currently known to affect x86, but has to be partially
	 * handled in the core MSI code.
	 */
+	MSI_FLAG_NOMASK_QUIRK		= (1 << 7),

+	/* Mask for the generic functionality */
+	MSI_GENERIC_FLAGS_MASK		= GENMASK(15, 0),
+
+	/* Mask for the domain specific functionality */
+	MSI_DOMAIN_FLAGS_MASK		= GENMASK(31, 16),
+
+	/* Support multiple PCI MSI interrupts */
+	MSI_FLAG_MULTI_PCI_MSI		= (1 << 16),
+	/* Support PCI MSIX interrupts */
+	MSI_FLAG_PCI_MSIX		= (1 << 17),
+	/* Is level-triggered capable, using two messages */
+	MSI_FLAG_LEVEL_CAPABLE		= (1 << 18),
+	/* MSI-X entries must be contiguous */
+	MSI_FLAG_MSIX_CONTIGUOUS	= (1 << 19),
+	/* PCI/MSI-X vectors can be dynamically allocated/freed post MSI-X enable */
+	MSI_FLAG_PCI_MSIX_ALLOC_DYN	= (1 << 20),
+	/* Support for PCI/IMS */
+	MSI_FLAG_PCI_IMS		= (1 << 21),
};

/**
 * struct msi_parent_ops - MSI parent domain callbacks and configuration info
 *
 * @supported_flags:	Required: The supported MSI flags of the parent domain
 * @prefix:		Optional: Prefix for the domain and chip name
 * @init_dev_msi_info:	Required: Callback for MSI parent domains to setup parent
 *			domain specific domain flags, domain ops and interrupt chip
 *			callbacks when a per device domain is created.
 */
struct msi_parent_ops {
	u32		supported_flags;
	const char	*prefix;
	bool		(*init_dev_msi_info)(struct device *dev, struct irq_domain *domain,
					     struct irq_domain *msi_parent_domain,
					     struct msi_domain_info *msi_child_info);
};

bool msi_parent_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
				  struct irq_domain *msi_parent_domain,
				  struct msi_domain_info *msi_child_info);

int msi_domain_set_affinity(struct irq_data *data, const struct cpumask *mask,
			    bool force);
struct irq_domain *msi_create_irq_domain(struct fwnode_handle *fwnode,
					 struct msi_domain_info *info,
					 struct irq_domain *parent);
-int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
-			    int nvec);
-int msi_domain_alloc_irqs_descs_locked(struct irq_domain *domain, struct device *dev,
-				       int nvec);
-int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
-			  int nvec);
-void __msi_domain_free_irqs(struct irq_domain *domain, struct device *dev);
-void msi_domain_free_irqs_descs_locked(struct irq_domain *domain, struct device *dev);
-void msi_domain_free_irqs(struct irq_domain *domain, struct device *dev);

+bool msi_create_device_irq_domain(struct device *dev, unsigned int domid,
+				  const struct msi_domain_template *template,
+				  unsigned int hwsize, void *domain_data,
+				  void *chip_data);
+void msi_remove_device_irq_domain(struct device *dev, unsigned int domid);
+
+bool msi_match_device_irq_domain(struct device *dev, unsigned int domid,
+				 enum irq_domain_bus_token bus_token);
+
+int msi_domain_alloc_irqs_range_locked(struct device *dev, unsigned int domid,
+				       unsigned int first, unsigned int last);
+int msi_domain_alloc_irqs_range(struct device *dev, unsigned int domid,
+				unsigned int first, unsigned int last);
+int msi_domain_alloc_irqs_all_locked(struct device *dev, unsigned int domid, int nirqs);
+
+struct msi_map msi_domain_alloc_irq_at(struct device *dev, unsigned int domid, unsigned int index,
+				       const struct irq_affinity_desc *affdesc,
+				       union msi_instance_cookie *cookie);
+
+void msi_domain_free_irqs_range_locked(struct device *dev, unsigned int domid,
+				       unsigned int first, unsigned int last);
+void msi_domain_free_irqs_range(struct device *dev, unsigned int domid,
+				unsigned int first, unsigned int last);
+void msi_domain_free_irqs_all_locked(struct device *dev, unsigned int domid);
+void msi_domain_free_irqs_all(struct device *dev, unsigned int domid);

struct msi_domain_info *msi_get_domain_info(struct irq_domain *domain);

struct irq_domain *platform_msi_create_irq_domain(struct fwnode_handle *fwnode,
@@ -467,20 +649,27 @@ int platform_msi_device_domain_alloc(struct irq_domain *domain, unsigned int vir
void platform_msi_device_domain_free(struct irq_domain *domain, unsigned int virq,
				     unsigned int nvec);
void *platform_msi_get_host_data(struct irq_domain *domain);
-#endif /* CONFIG_GENERIC_MSI_IRQ_DOMAIN */
+#endif /* CONFIG_GENERIC_MSI_IRQ */

-#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
+/* PCI specific interfaces */
+#ifdef CONFIG_PCI_MSI
+struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc);
+void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg);
+void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+void pci_msi_mask_irq(struct irq_data *data);
+void pci_msi_unmask_irq(struct irq_data *data);
struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
					     struct msi_domain_info *info,
					     struct irq_domain *parent);
u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
-bool pci_dev_has_special_msi_domain(struct pci_dev *pdev);
-#else
+#else /* CONFIG_PCI_MSI */
static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
{
	return NULL;
}
-#endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */
+static inline void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg) { }
+#endif /* !CONFIG_PCI_MSI */

#endif /* LINUX_MSI_H */
include/linux/msi_api.h (new file, 73 lines)
@@ -0,0 +1,73 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef LINUX_MSI_API_H
#define LINUX_MSI_API_H

/*
 * APIs which are relevant for device driver code for allocating and
 * freeing MSI interrupts and querying the associations between
 * hardware/software MSI indices and the Linux interrupt number.
 */

struct device;

/*
 * Per device interrupt domain related constants.
 */
enum msi_domain_ids {
	MSI_DEFAULT_DOMAIN,
	MSI_SECONDARY_DOMAIN,
	MSI_MAX_DEVICE_IRQDOMAINS,
};

/**
 * union msi_instance_cookie - MSI instance cookie
 * @value:	u64 value store
 * @ptr:	Pointer to usage site specific data
 *
 * This cookie is handed to the IMS allocation function and stored in the
 * MSI descriptor for the interrupt chip callbacks.
 *
 * The content of this cookie is MSI domain implementation defined. For
 * PCI/IMS implementations this could be a PASID or a pointer to queue
 * memory.
 */
union msi_instance_cookie {
	u64	value;
	void	*ptr;
};

/**
 * msi_map - Mapping between MSI index and Linux interrupt number
 * @index:	The MSI index, e.g. slot in the MSI-X table or
 *		a software managed index if >= 0. If negative
 *		the allocation function failed and it contains
 *		the error code.
 * @virq:	The associated Linux interrupt number
 */
struct msi_map {
	int	index;
	int	virq;
};

/*
 * Constant to be used for dynamic allocations when the allocation is any
 * free MSI index, which is either an entry in a hardware table or a
 * software managed index.
 */
#define MSI_ANY_INDEX		UINT_MAX

unsigned int msi_domain_get_virq(struct device *dev, unsigned int domid, unsigned int index);

/**
 * msi_get_virq - Lookup the Linux interrupt number for a MSI index on the default interrupt domain
 * @dev:	Device for which the lookup happens
 * @index:	The MSI index to lookup
 *
 * Return: The Linux interrupt number on success (> 0), 0 if not found
 */
static inline unsigned int msi_get_virq(struct device *dev, unsigned int index)
{
	return msi_domain_get_virq(dev, MSI_DEFAULT_DOMAIN, index);
}

#endif
@@ -38,6 +38,7 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/resource_ext.h>
+#include <linux/msi_api.h>
#include <uapi/linux/pci.h>

#include <linux/pci_ids.h>
@@ -1553,10 +1554,17 @@ static inline int pci_enable_msix_exact(struct pci_dev *dev,
		return rc;
	return 0;
}
-int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
-			  unsigned int max_vecs, unsigned int flags);
int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
				   unsigned int max_vecs, unsigned int flags,
				   struct irq_affinity *affd);

+bool pci_msix_can_alloc_dyn(struct pci_dev *dev);
+struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
+				     const struct irq_affinity_desc *affdesc);
+void pci_msix_free_irq(struct pci_dev *pdev, struct msi_map map);
+
void pci_free_irq_vectors(struct pci_dev *dev);
int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev, int vec);
@@ -1586,6 +1594,13 @@ pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
		return 1;
	return -ENOSPC;
}
+static inline int
+pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
+		      unsigned int max_vecs, unsigned int flags)
+{
+	return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
+					      flags, NULL);
+}

static inline void pci_free_irq_vectors(struct pci_dev *dev)
{
@@ -1898,15 +1913,13 @@ pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
{
	return -ENOSPC;
}
-#endif /* CONFIG_PCI */

static inline int
pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
		      unsigned int max_vecs, unsigned int flags)
{
-	return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags,
-					      NULL);
+	return -ENOSPC;
}
+#endif /* CONFIG_PCI */

/* Include architecture-dependent settings and functions */
@@ -2474,6 +2487,14 @@ static inline bool pci_is_thunderbolt_attached(struct pci_dev *pdev)
void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type);
#endif

+struct msi_domain_template;
+
+bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
+			   unsigned int hwsize, void *data);
+struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev, union msi_instance_cookie *icookie,
+				 const struct irq_affinity_desc *affdesc);
+void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map);
+
#include <linux/dma-mapping.h>

#define pci_printk(level, pdev, fmt, arg...) \
@@ -86,15 +86,10 @@ config GENERIC_IRQ_IPI
	depends on SMP
	select IRQ_DOMAIN_HIERARCHY

-# Generic MSI interrupt support
+# Generic MSI hierarchical interrupt domain support
config GENERIC_MSI_IRQ
	bool
-
-# Generic MSI hierarchical interrupt domain support
-config GENERIC_MSI_IRQ_DOMAIN
-	bool
	select IRQ_DOMAIN_HIERARCHY
-	select GENERIC_MSI_IRQ

config IRQ_MSI_IOMMU
	bool
@@ -1561,10 +1561,10 @@ int irq_chip_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
	return 0;
}

-static struct device *irq_get_parent_device(struct irq_data *data)
+static struct device *irq_get_pm_device(struct irq_data *data)
{
	if (data->domain)
-		return data->domain->dev;
+		return data->domain->pm_dev;

	return NULL;
}
@@ -1578,7 +1578,7 @@ static struct device *irq_get_parent_device(struct irq_data *data)
 */
int irq_chip_pm_get(struct irq_data *data)
{
-	struct device *dev = irq_get_parent_device(data);
+	struct device *dev = irq_get_pm_device(data);
	int retval = 0;

	if (IS_ENABLED(CONFIG_PM) && dev)
@@ -1597,7 +1597,7 @@ int irq_chip_pm_get(struct irq_data *data)
 */
int irq_chip_pm_put(struct irq_data *data)
{
-	struct device *dev = irq_get_parent_device(data);
+	struct device *dev = irq_get_pm_device(data);
	int retval = 0;

	if (IS_ENABLED(CONFIG_PM) && dev)
@@ -52,6 +52,7 @@ enum {
 * IRQS_PENDING			- irq is pending and replayed later
 * IRQS_SUSPENDED		- irq is suspended
 * IRQS_NMI			- irq line is used to deliver NMIs
+* IRQS_SYSFS			- descriptor has been added to sysfs
 */
enum {
	IRQS_AUTODETECT		= 0x00000001,
@@ -64,6 +65,7 @@ enum {
	IRQS_SUSPENDED		= 0x00000800,
	IRQS_TIMINGS		= 0x00001000,
	IRQS_NMI		= 0x00002000,
+	IRQS_SYSFS		= 0x00004000,
};

#include "debug.h"
@@ -288,22 +288,25 @@ static void irq_sysfs_add(int irq, struct irq_desc *desc)
	if (irq_kobj_base) {
		/*
		 * Continue even in case of failure as this is nothing
-		 * crucial.
+		 * crucial and failures in the late irq_sysfs_init()
+		 * cannot be rolled back.
		 */
		if (kobject_add(&desc->kobj, irq_kobj_base, "%d", irq))
			pr_warn("Failed to add kobject for irq %d\n", irq);
+		else
+			desc->istate |= IRQS_SYSFS;
	}
}

static void irq_sysfs_del(struct irq_desc *desc)
{
	/*
-	 * If irq_sysfs_init() has not yet been invoked (early boot), then
-	 * irq_kobj_base is NULL and the descriptor was never added.
-	 * kobject_del() complains about a object with no parent, so make
-	 * it conditional.
+	 * Only invoke kobject_del() when kobject_add() was successfully
+	 * invoked for the descriptor. This covers both early boot, where
+	 * sysfs is not initialized yet, and the case of a failed
+	 * kobject_add() invocation.
	 */
-	if (irq_kobj_base)
+	if (desc->istate & IRQS_SYSFS)
		kobject_del(&desc->kobj);
}
@@ -321,7 +321,7 @@ static int irq_try_set_affinity(struct irq_data *data,
}

static bool irq_set_affinity_deactivated(struct irq_data *data,
-					 const struct cpumask *mask, bool force)
+					 const struct cpumask *mask)
{
	struct irq_desc *desc = irq_data_to_desc(data);

@@ -354,7 +354,7 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
	if (!chip || !chip->irq_set_affinity)
		return -EINVAL;

-	if (irq_set_affinity_deactivated(data, mask, force))
+	if (irq_set_affinity_deactivated(data, mask))
		return 0;

	if (irq_can_move_pcntxt(data) && !irqd_is_setaffinity_pending(data)) {
kernel/irq/msi.c: 904 lines changed (diff suppressed because it is too large)