author | Linus Torvalds <torvalds@linux-foundation.org> | 2017-09-09 01:47:43 +0300 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2017-09-09 01:47:43 +0300 |
commit | 0d519f2d1ed1f11e49abc88cfcf6cf13b83ba14c (patch) | |
tree | fe2bfed7c6e8448f9661216610ec81f1e0c28515 | /drivers/pci |
parent | 0756b7fbb696d2cb18785da9cab13ec164017f64 (diff) | |
parent | cf2d804110d3c20dc6865ade514c44179de34855 (diff) | |
download | linux-0d519f2d1ed1f11e49abc88cfcf6cf13b83ba14c.tar.xz | |
Merge tag 'pci-v4.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- add enhanced Downstream Port Containment support, which prints more
details about Root Port Programmed I/O errors (Dongdong Liu)
- add Layerscape ls1088a and ls2088a support (Hou Zhiqiang)
- add MediaTek MT2712 and MT7622 support (Ryder Lee)
- add MediaTek MT2712 and MT7622 MSI support (Honghui Zhang)
- add Qualcomm IPQ8074 support (Varadarajan Narayanan)
- add R-Car r8a7743/5 device tree support (Biju Das)
- add Rockchip per-lane PHY support for better power management (Shawn
Lin)
- fix IRQ mapping for hot-added devices by replacing the
pci_fixup_irqs() boot-time design with a host bridge hook called at
probe time (Lorenzo Pieralisi, Matthew Minter); see the host-bridge
hook sketch after this list
- fix a race when enabling two devices that results in the upstream
bridge not being enabled correctly (Srinath Mannam)
- fix pciehp power fault infinite loop (Keith Busch)
- fix SHPC bridge MSI hotplug events by enabling bus mastering
(Aleksandr Bezzubikov)
- fix a VFIO issue by correcting PCIe capability sizes (Alex
Williamson)
- fix an INTD issue on Xilinx and possibly other drivers by unifying
INTx IRQ domain support (Paul Burton)
- avoid IOMMU stalls by marking AMD Stoney GPU ATS as broken (Joerg
Roedel)
- allow APM X-Gene device assignment to guests by adding an ACS quirk
(Feng Kan)
- fix driver crashes by disabling Extended Tags on Broadcom HT2100
(Extended Tags support is required for PCIe Receivers but not
Requesters, and we now enable them by default when Requesters support
them) (Sinan Kaya); a sketch of the enable-if-supported check follows
this list
- fix MSIs for devices that use phantom RIDs for DMA by assuming MSIs
use the real Requester ID (not a phantom RID) (Robin Murphy)
- prevent assignment of Intel VMD children to guests (which may be
supported eventually, but isn't yet) by not associating an IOMMU with
them (Jon Derrick)
- fix Intel VMD suspend/resume by releasing IRQs on suspend (Scott
Bauer)
- fix a Function-Level Reset issue with Intel 750 NVMe by waiting
longer (up to 60sec instead of 1sec) for device to become ready
(Sinan Kaya)
- fix a Function-Level Reset issue on iProc Stingray by working around
hardware defects in the CRS implementation (Oza Pawandeep)
- fix an issue with Intel NVMe P3700 after an iProc reset by adding a
delay during shutdown (Oza Pawandeep)
- fix a Microsoft Hyper-V lockdep issue by polling instead of blocking
in compose_msi_msg() (Stephen Hemminger)
- fix a wireless LAN driver timeout by clearing DesignWare MSI
interrupt status after it is handled, not before (Faiz Abbas)
- fix DesignWare ATU enable checking (Jisheng Zhang)
- reduce Layerscape dependencies on the bootloader by doing more
initialization in the driver (Hou Zhiqiang)
- improve Intel VMD performance by allowing allocation of more IRQ
vectors than present CPUs (Keith Busch)
- improve endpoint framework support for initial DMA mask, different
BAR sizes, configurable page sizes, MSI, test driver, etc (Kishon
Vijay Abraham I, Stan Drozd)
- rework CRS support to add periodic messages while we poll during
enumeration and after Function-Level Reset, and prepare for possible
other uses of CRS (Sinan Kaya); see the CRS polling sketch after this
list
- clean up Root Port AER handling by removing unnecessary code and
moving error handler methods to struct pcie_port_service_driver
(Christoph Hellwig)
- clean up error handling paths in various drivers (Bjorn Andersson,
Fabio Estevam, Gustavo A. R. Silva, Harunobu Kurokawa, Jeffy Chen,
Lorenzo Pieralisi, Sergei Shtylyov)
- clean up SR-IOV resource handling by disabling VF decoding before
updating the corresponding resource structs (Gavin Shan)
- clean up DesignWare-based drivers by unifying quirks to update Class
Code and Interrupt Pin and related handling of write-protected
registers (Hou Zhiqiang); see the DBI write-enable sketch after this
list
- clean up by adding empty generic pcibios_align_resource() and
pcibios_fixup_bus() and removing empty arch-specific implementations
(Palmer Dabbelt)
- request exclusive reset control for several drivers to allow cleanup
elsewhere (Philipp Zabel)
- constify various structures (Arvind Yadav, Bhumika Goyal)
- convert from full_name() to %pOF (Rob Herring)
- remove unused variables from iProc, HiSi, Altera, Keystone (Shawn
Lin)
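
The IRQ-mapping rework above deserves a concrete picture: instead of a one-shot pci_fixup_irqs() pass at boot, a native host controller driver now publishes its INTx swizzling and mapping through struct pci_host_bridge, and the core applies them as each device is probed, including hot-added ones. A minimal sketch of what a DT-based driver's probe might set, assuming the generic helpers of_irq_parse_and_map_pci() and pci_common_swizzle(); the driver name and the elided setup are hypothetical:

```c
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

static int my_pcie_probe(struct platform_device *pdev)
{
	struct pci_host_bridge *bridge;

	bridge = devm_pci_alloc_host_bridge(&pdev->dev, 0);
	if (!bridge)
		return -ENOMEM;

	/* ... map registers, request bus resources, fill bridge->sysdata ... */

	/* Probe-time INTx hooks; this replaces the old pci_fixup_irqs() pass */
	bridge->map_irq = of_irq_parse_and_map_pci;
	bridge->swizzle_irq = pci_common_swizzle;

	return pci_scan_root_bus_bridge(bridge);
}
```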
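On the Extended Tags item: every PCIe Receiver must accept 8-bit tags, but a Requester may only generate them when the enable bit is set, which is why the core can now turn it on whenever Device Capabilities advertises support, with the HT2100 quirk opting affected bridges out. A rough sketch of that check-and-enable step using the standard capability accessors; the function name is made up, and the real logic lives in the PCI core's device setup:

```c
#include <linux/pci.h>

/* Enable 8-bit (Extended) Tags if the device advertises support in its
 * Device Capabilities register; quirked devices would simply skip this. */
static void ext_tags_enable_if_supported(struct pci_dev *dev)
{
	u32 cap;

	if (pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap))
		return;

	if (cap & PCI_EXP_DEVCAP_EXT_TAG)
		pcie_capability_set_word(dev, PCI_EXP_DEVCTL,
					 PCI_EXP_DEVCTL_EXT_TAG);
}
```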
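For the CRS rework: with CRS Software Visibility enabled, a config read of the Vendor ID on a device that is still initializing completes with the special value 0xffff0001, so the core keeps re-reading until the device answers (during enumeration and after a Function-Level Reset) and now logs a periodic note so long waits are visible. A simplified sketch, assuming a struct pci_dev is already available; the helper name, 60-second bound, and message cadence are illustrative rather than the kernel's exact code:

```c
#include <linux/delay.h>
#include <linux/pci.h>

static bool my_wait_for_device_ready(struct pci_dev *dev)
{
	int waited_ms = 0;
	u32 id;

	for (;;) {
		pci_read_config_dword(dev, PCI_VENDOR_ID, &id);
		if (id != 0xffff0001)
			return true;	/* real Vendor ID: device is ready */
		if (waited_ms >= 60 * 1000)
			return false;	/* give up after roughly 60 seconds */
		if (waited_ms && !(waited_ms % 1000))
			dev_info(&dev->dev, "not ready after %d ms, retrying\n",
				 waited_ms);
		msleep(20);
		waited_ms += 20;
	}
}
```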
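The "write-protected registers" in that DesignWare cleanup are the DBI registers guarded by the DBI_RO_WR_EN bit in the MISC_CONTROL_1 register; this series adds dw_pcie_dbi_ro_wr_en()/dw_pcie_dbi_ro_wr_dis() helpers and wraps the Class Code and Interrupt Pin fixups with them. Roughly, adapted from the helper and RC setup changes in this merge:

```c
#define PCIE_MISC_CONTROL_1_OFF	0x8BC
#define PCIE_DBI_RO_WR_EN	(0x1 << 0)

static inline void dw_pcie_dbi_ro_wr_en(struct dw_pcie *pci)
{
	u32 val;

	val = dw_pcie_readl_dbi(pci, PCIE_MISC_CONTROL_1_OFF);
	val |= PCIE_DBI_RO_WR_EN;
	dw_pcie_writel_dbi(pci, PCIE_MISC_CONTROL_1_OFF, val);
}

/* Typical use in dw_pcie_setup_rc(): unlock only around the fixup. */
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
dw_pcie_dbi_ro_wr_dis(pci);
```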
* tag 'pci-v4.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (170 commits)
PCI: xgene: Clean up whitespace
PCI: xgene: Define XGENE_PCI_EXP_CAP and use generic PCI_EXP_RTCTL offset
PCI: xgene: Fix platform_get_irq() error handling
PCI: xilinx-nwl: Fix platform_get_irq() error handling
PCI: rockchip: Fix platform_get_irq() error handling
PCI: altera: Fix platform_get_irq() error handling
PCI: spear13xx: Fix platform_get_irq() error handling
PCI: artpec6: Fix platform_get_irq() error handling
PCI: armada8k: Fix platform_get_irq() error handling
PCI: dra7xx: Fix platform_get_irq() error handling
PCI: exynos: Fix platform_get_irq() error handling
PCI: iproc: Clean up whitespace
PCI: iproc: Rename PCI_EXP_CAP to IPROC_PCI_EXP_CAP
PCI: iproc: Add 500ms delay during device shutdown
PCI: Fix typos and whitespace errors
PCI: Remove unused "res" variable from pci_resource_io()
PCI: Correct kernel-doc of pci_vpd_srdt_size(), pci_vpd_srdt_tag()
PCI/AER: Reformat AER register definitions
iommu/vt-d: Prevent VMD child devices from being remapping targets
x86/PCI: Use is_vmd() rather than relying on the domain number
...
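
The run of "Fix platform_get_irq() error handling" commits above all apply the same pattern: platform_get_irq() returns a negative errno on failure (it does not return 0), so the drivers now test for a negative value and propagate it instead of substituting -EINVAL or -ENODEV. From the dra7xx hunk in this merge, roughly:

```c
pp->irq = platform_get_irq(pdev, 1);
if (pp->irq < 0) {
	dev_err(dev, "missing IRQ resource: %d\n", pp->irq);
	return pp->irq;
}
```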
Diffstat (limited to 'drivers/pci')
70 files changed, 2438 insertions, 1002 deletions
diff --git a/drivers/pci/dwc/Kconfig b/drivers/pci/dwc/Kconfig index d275aadc47ee..22ec82fcdea2 100644 --- a/drivers/pci/dwc/Kconfig +++ b/drivers/pci/dwc/Kconfig @@ -25,7 +25,7 @@ config PCI_DRA7XX work either as EP or RC. In order to enable host-specific features PCI_DRA7XX_HOST must be selected and in order to enable device- specific features PCI_DRA7XX_EP must be selected. This uses - the Designware core. + the DesignWare core. if PCI_DRA7XX @@ -97,8 +97,8 @@ config PCI_KEYSTONE select PCIE_DW_HOST help Say Y here if you want to enable PCI controller support on Keystone - SoCs. The PCI controller on Keystone is based on Designware hardware - and therefore the driver re-uses the Designware core functions to + SoCs. The PCI controller on Keystone is based on DesignWare hardware + and therefore the driver re-uses the DesignWare core functions to implement the driver. config PCI_LAYERSCAPE @@ -132,7 +132,7 @@ config PCIE_QCOM select PCIE_DW_HOST help Say Y here to enable PCIe controller support on Qualcomm SoCs. The - PCIe controller uses the Designware core plus Qualcomm-specific + PCIe controller uses the DesignWare core plus Qualcomm-specific hardware wrappers. config PCIE_ARMADA_8K @@ -145,8 +145,8 @@ config PCIE_ARMADA_8K help Say Y here if you want to enable PCIe controller support on Armada-8K SoCs. The PCIe controller on Armada-8K is based on - Designware hardware and therefore the driver re-uses the - Designware core functions to implement the driver. + DesignWare hardware and therefore the driver re-uses the + DesignWare core functions to implement the driver. config PCIE_ARTPEC6 bool "Axis ARTPEC-6 PCIe controller" diff --git a/drivers/pci/dwc/pci-dra7xx.c b/drivers/pci/dwc/pci-dra7xx.c index f2fc5f47064e..34427a6a15af 100644 --- a/drivers/pci/dwc/pci-dra7xx.c +++ b/drivers/pci/dwc/pci-dra7xx.c @@ -195,7 +195,7 @@ static void dra7xx_pcie_enable_interrupts(struct dra7xx_pcie *dra7xx) dra7xx_pcie_enable_msi_interrupts(dra7xx); } -static void dra7xx_pcie_host_init(struct pcie_port *pp) +static int dra7xx_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); @@ -206,6 +206,8 @@ static void dra7xx_pcie_host_init(struct pcie_port *pp) dw_pcie_wait_for_link(pci); dw_pcie_msi_init(pp); dra7xx_pcie_enable_interrupts(dra7xx); + + return 0; } static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = { @@ -238,7 +240,7 @@ static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp) return -ENODEV; } - dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, 4, + dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, &intx_domain_ops, pp); if (!dra7xx->irq_domain) { dev_err(dev, "Failed to get a INTx IRQ domain\n"); @@ -275,7 +277,6 @@ static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg) return IRQ_HANDLED; } - static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg) { struct dra7xx_pcie *dra7xx = arg; @@ -335,10 +336,23 @@ static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg) return IRQ_HANDLED; } +static void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) +{ + u32 reg; + + reg = PCI_BASE_ADDRESS_0 + (4 * bar); + dw_pcie_writel_dbi2(pci, reg, 0x0); + dw_pcie_writel_dbi(pci, reg, 0x0); +} + static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep) { struct dw_pcie *pci = to_dw_pcie_from_ep(ep); struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); + enum pci_barno bar; + + for (bar = BAR_0; bar <= BAR_5; bar++) + dw_pcie_ep_reset_bar(pci, bar); 
dra7xx_pcie_enable_wrapper_interrupts(dra7xx); } @@ -435,7 +449,7 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx, pp->irq = platform_get_irq(pdev, 1); if (pp->irq < 0) { dev_err(dev, "missing IRQ resource\n"); - return -EINVAL; + return pp->irq; } ret = devm_request_irq(dev, pp->irq, dra7xx_pcie_msi_irq_handler, @@ -616,8 +630,8 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) irq = platform_get_irq(pdev, 0); if (irq < 0) { - dev_err(dev, "missing IRQ resource\n"); - return -EINVAL; + dev_err(dev, "missing IRQ resource: %d\n", irq); + return irq; } res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf"); diff --git a/drivers/pci/dwc/pci-exynos.c b/drivers/pci/dwc/pci-exynos.c index c78c06552590..5596fdedbb94 100644 --- a/drivers/pci/dwc/pci-exynos.c +++ b/drivers/pci/dwc/pci-exynos.c @@ -581,13 +581,15 @@ static int exynos_pcie_link_up(struct dw_pcie *pci) return 0; } -static void exynos_pcie_host_init(struct pcie_port *pp) +static int exynos_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct exynos_pcie *ep = to_exynos_pcie(pci); exynos_pcie_establish_link(ep); exynos_pcie_enable_interrupts(ep); + + return 0; } static const struct dw_pcie_host_ops exynos_pcie_host_ops = { @@ -605,9 +607,9 @@ static int __init exynos_add_pcie_port(struct exynos_pcie *ep, int ret; pp->irq = platform_get_irq(pdev, 1); - if (!pp->irq) { + if (pp->irq < 0) { dev_err(dev, "failed to get irq\n"); - return -ENODEV; + return pp->irq; } ret = devm_request_irq(dev, pp->irq, exynos_pcie_irq_handler, IRQF_SHARED, "exynos-pcie", ep); @@ -618,9 +620,9 @@ static int __init exynos_add_pcie_port(struct exynos_pcie *ep, if (IS_ENABLED(CONFIG_PCI_MSI)) { pp->msi_irq = platform_get_irq(pdev, 0); - if (!pp->msi_irq) { + if (pp->msi_irq < 0) { dev_err(dev, "failed to get msi irq\n"); - return -ENODEV; + return pp->msi_irq; } ret = devm_request_irq(dev, pp->msi_irq, diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c index bf5c3616e344..b73483534a5b 100644 --- a/drivers/pci/dwc/pci-imx6.c +++ b/drivers/pci/dwc/pci-imx6.c @@ -636,7 +636,7 @@ err_reset_phy: return ret; } -static void imx6_pcie_host_init(struct pcie_port *pp) +static int imx6_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci); @@ -649,6 +649,8 @@ static void imx6_pcie_host_init(struct pcie_port *pp) if (IS_ENABLED(CONFIG_PCI_MSI)) dw_pcie_msi_init(pp); + + return 0; } static int imx6_pcie_link_up(struct dw_pcie *pci) @@ -778,14 +780,15 @@ static int imx6_pcie_probe(struct platform_device *pdev) } break; case IMX7D: - imx6_pcie->pciephy_reset = devm_reset_control_get(dev, - "pciephy"); + imx6_pcie->pciephy_reset = devm_reset_control_get_exclusive(dev, + "pciephy"); if (IS_ERR(imx6_pcie->pciephy_reset)) { dev_err(dev, "Failed to get PCIEPHY reset control\n"); return PTR_ERR(imx6_pcie->pciephy_reset); } - imx6_pcie->apps_reset = devm_reset_control_get(dev, "apps"); + imx6_pcie->apps_reset = devm_reset_control_get_exclusive(dev, + "apps"); if (IS_ERR(imx6_pcie->apps_reset)) { dev_err(dev, "Failed to get PCIE APPS reset control\n"); return PTR_ERR(imx6_pcie->apps_reset); diff --git a/drivers/pci/dwc/pci-keystone-dw.c b/drivers/pci/dwc/pci-keystone-dw.c index 8bc626e640c8..2fb20b887d2a 100644 --- a/drivers/pci/dwc/pci-keystone-dw.c +++ b/drivers/pci/dwc/pci-keystone-dw.c @@ -1,5 +1,5 @@ /* - * Designware application register space functions for Keystone PCI controller + * 
DesignWare application register space functions for Keystone PCI controller * * Copyright (C) 2013-2014 Texas Instruments., Ltd. * http://www.ti.com @@ -168,16 +168,12 @@ void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) static void ks_dw_pcie_msi_irq_mask(struct irq_data *d) { - struct keystone_pcie *ks_pcie; struct msi_desc *msi; struct pcie_port *pp; - struct dw_pcie *pci; u32 offset; msi = irq_data_get_msi_desc(d); pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); - pci = to_dw_pcie_from_pp(pp); - ks_pcie = to_keystone_pcie(pci); offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); /* Mask the end point if PVM implemented */ @@ -191,16 +187,12 @@ static void ks_dw_pcie_msi_irq_mask(struct irq_data *d) static void ks_dw_pcie_msi_irq_unmask(struct irq_data *d) { - struct keystone_pcie *ks_pcie; struct msi_desc *msi; struct pcie_port *pp; - struct dw_pcie *pci; u32 offset; msi = irq_data_get_msi_desc(d); pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); - pci = to_dw_pcie_from_pp(pp); - ks_pcie = to_keystone_pcie(pci); offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); /* Mask the end point if PVM implemented */ @@ -259,7 +251,7 @@ void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) { int i; - for (i = 0; i < MAX_LEGACY_IRQS; i++) + for (i = 0; i < PCI_NUM_INTX; i++) ks_dw_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); } @@ -565,7 +557,7 @@ int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, /* Create legacy IRQ domain */ ks_pcie->legacy_irq_domain = irq_domain_add_linear(ks_pcie->legacy_intc_np, - MAX_LEGACY_IRQS, + PCI_NUM_INTX, &ks_dw_pcie_legacy_irq_domain_ops, NULL); if (!ks_pcie->legacy_irq_domain) { diff --git a/drivers/pci/dwc/pci-keystone.c b/drivers/pci/dwc/pci-keystone.c index 4783cec1f78d..5bee3af47588 100644 --- a/drivers/pci/dwc/pci-keystone.c +++ b/drivers/pci/dwc/pci-keystone.c @@ -32,10 +32,6 @@ #define DRIVER_NAME "keystone-pcie" -/* driver specific constants */ -#define MAX_MSI_HOST_IRQS 8 -#define MAX_LEGACY_HOST_IRQS 4 - /* DEV_STAT_CTRL */ #define PCIE_CAP_BASE 0x70 @@ -173,7 +169,7 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie, if (legacy) { np_temp = &ks_pcie->legacy_intc_np; - max_host_irqs = MAX_LEGACY_HOST_IRQS; + max_host_irqs = PCI_NUM_INTX; host_irqs = &ks_pcie->legacy_host_irqs[0]; } else { np_temp = &ks_pcie->msi_intc_np; @@ -261,7 +257,7 @@ static int keystone_pcie_fault(unsigned long addr, unsigned int fsr, return 0; } -static void __init ks_pcie_host_init(struct pcie_port *pp) +static int __init ks_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); @@ -289,6 +285,8 @@ static void __init ks_pcie_host_init(struct pcie_port *pp) */ hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0, "Asynchronous external abort"); + + return 0; } static const struct dw_pcie_host_ops keystone_pcie_host_ops = { diff --git a/drivers/pci/dwc/pci-keystone.h b/drivers/pci/dwc/pci-keystone.h index 74c5825882df..30b7bc2ac380 100644 --- a/drivers/pci/dwc/pci-keystone.h +++ b/drivers/pci/dwc/pci-keystone.h @@ -12,9 +12,7 @@ * published by the Free Software Foundation. 
*/ -#define MAX_LEGACY_IRQS 4 #define MAX_MSI_HOST_IRQS 8 -#define MAX_LEGACY_HOST_IRQS 4 struct keystone_pcie { struct dw_pcie *pci; @@ -22,7 +20,7 @@ struct keystone_pcie { /* PCI Device ID */ u32 device_id; int num_legacy_host_irqs; - int legacy_host_irqs[MAX_LEGACY_HOST_IRQS]; + int legacy_host_irqs[PCI_NUM_INTX]; struct device_node *legacy_intc_np; int num_msi_host_irqs; diff --git a/drivers/pci/dwc/pci-layerscape.c b/drivers/pci/dwc/pci-layerscape.c index fd861289ad8b..87fa486bee2c 100644 --- a/drivers/pci/dwc/pci-layerscape.c +++ b/drivers/pci/dwc/pci-layerscape.c @@ -33,7 +33,8 @@ /* PEX Internal Configuration Registers */ #define PCIE_STRFMR1 0x71c /* Symbol Timer & Filter Mask Register1 */ -#define PCIE_DBI_RO_WR_EN 0x8bc /* DBI Read-Only Write Enable Register */ + +#define PCIE_IATU_NUM 6 struct ls_pcie_drvdata { u32 lut_offset; @@ -72,14 +73,6 @@ static void ls_pcie_clear_multifunction(struct ls_pcie *pcie) iowrite8(PCI_HEADER_TYPE_BRIDGE, pci->dbi_base + PCI_HEADER_TYPE); } -/* Fix class value */ -static void ls_pcie_fix_class(struct ls_pcie *pcie) -{ - struct dw_pcie *pci = pcie->pci; - - iowrite16(PCI_CLASS_BRIDGE_PCI, pci->dbi_base + PCI_CLASS_DEVICE); -} - /* Drop MSG TLP except for Vendor MSG */ static void ls_pcie_drop_msg_tlp(struct ls_pcie *pcie) { @@ -91,6 +84,14 @@ static void ls_pcie_drop_msg_tlp(struct ls_pcie *pcie) iowrite32(val, pci->dbi_base + PCIE_STRFMR1); } +static void ls_pcie_disable_outbound_atus(struct ls_pcie *pcie) +{ + int i; + + for (i = 0; i < PCIE_IATU_NUM; i++) + dw_pcie_disable_atu(pcie->pci, DW_PCIE_REGION_OUTBOUND, i); +} + static int ls1021_pcie_link_up(struct dw_pcie *pci) { u32 state; @@ -108,33 +109,6 @@ static int ls1021_pcie_link_up(struct dw_pcie *pci) return 1; } -static void ls1021_pcie_host_init(struct pcie_port *pp) -{ - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); - struct ls_pcie *pcie = to_ls_pcie(pci); - struct device *dev = pci->dev; - u32 index[2]; - - pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node, - "fsl,pcie-scfg"); - if (IS_ERR(pcie->scfg)) { - dev_err(dev, "No syscfg phandle specified\n"); - pcie->scfg = NULL; - return; - } - - if (of_property_read_u32_array(dev->of_node, - "fsl,pcie-scfg", index, 2)) { - pcie->scfg = NULL; - return; - } - pcie->index = index[1]; - - dw_pcie_setup_rc(pp); - - ls_pcie_drop_msg_tlp(pcie); -} - static int ls_pcie_link_up(struct dw_pcie *pci) { struct ls_pcie *pcie = to_ls_pcie(pci); @@ -150,16 +124,54 @@ static int ls_pcie_link_up(struct dw_pcie *pci) return 1; } -static void ls_pcie_host_init(struct pcie_port *pp) +static int ls_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct ls_pcie *pcie = to_ls_pcie(pci); - iowrite32(1, pci->dbi_base + PCIE_DBI_RO_WR_EN); - ls_pcie_fix_class(pcie); + /* + * Disable outbound windows configured by the bootloader to avoid + * one transaction hitting multiple outbound windows. + * dw_pcie_setup_rc() will reconfigure the outbound windows. 
+ */ + ls_pcie_disable_outbound_atus(pcie); + + dw_pcie_dbi_ro_wr_en(pci); ls_pcie_clear_multifunction(pcie); + dw_pcie_dbi_ro_wr_dis(pci); + ls_pcie_drop_msg_tlp(pcie); - iowrite32(0, pci->dbi_base + PCIE_DBI_RO_WR_EN); + + dw_pcie_setup_rc(pp); + + return 0; +} + +static int ls1021_pcie_host_init(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct ls_pcie *pcie = to_ls_pcie(pci); + struct device *dev = pci->dev; + u32 index[2]; + int ret; + + pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node, + "fsl,pcie-scfg"); + if (IS_ERR(pcie->scfg)) { + ret = PTR_ERR(pcie->scfg); + dev_err(dev, "No syscfg phandle specified\n"); + pcie->scfg = NULL; + return ret; + } + + if (of_property_read_u32_array(dev->of_node, + "fsl,pcie-scfg", index, 2)) { + pcie->scfg = NULL; + return -EINVAL; + } + pcie->index = index[1]; + + return ls_pcie_host_init(pp); } static int ls_pcie_msi_host_init(struct pcie_port *pp, @@ -232,12 +244,22 @@ static struct ls_pcie_drvdata ls2080_drvdata = { .dw_pcie_ops = &dw_ls_pcie_ops, }; +static struct ls_pcie_drvdata ls2088_drvdata = { + .lut_offset = 0x80000, + .ltssm_shift = 0, + .lut_dbg = 0x407fc, + .ops = &ls_pcie_host_ops, + .dw_pcie_ops = &dw_ls_pcie_ops, +}; + static const struct of_device_id ls_pcie_of_match[] = { { .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata }, { .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata }, { .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata }, { .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata }, { .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata }, + { .compatible = "fsl,ls2088a-pcie", .data = &ls2088_drvdata }, + { .compatible = "fsl,ls1088a-pcie", .data = &ls2088_drvdata }, { }, }; diff --git a/drivers/pci/dwc/pcie-armada8k.c b/drivers/pci/dwc/pcie-armada8k.c index ea8f34af6a85..370d057c0046 100644 --- a/drivers/pci/dwc/pcie-armada8k.c +++ b/drivers/pci/dwc/pcie-armada8k.c @@ -134,13 +134,15 @@ static void armada8k_pcie_establish_link(struct armada8k_pcie *pcie) dev_err(pci->dev, "Link not up after reconfiguration\n"); } -static void armada8k_pcie_host_init(struct pcie_port *pp) +static int armada8k_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct armada8k_pcie *pcie = to_armada8k_pcie(pci); dw_pcie_setup_rc(pp); armada8k_pcie_establish_link(pcie); + + return 0; } static irqreturn_t armada8k_pcie_irq_handler(int irq, void *arg) @@ -176,9 +178,9 @@ static int armada8k_add_pcie_port(struct armada8k_pcie *pcie, pp->ops = &armada8k_pcie_host_ops; pp->irq = platform_get_irq(pdev, 0); - if (!pp->irq) { + if (pp->irq < 0) { dev_err(dev, "failed to get irq for port\n"); - return -ENODEV; + return pp->irq; } ret = devm_request_irq(dev, pp->irq, armada8k_pcie_irq_handler, @@ -226,7 +228,9 @@ static int armada8k_pcie_probe(struct platform_device *pdev) if (IS_ERR(pcie->clk)) return PTR_ERR(pcie->clk); - clk_prepare_enable(pcie->clk); + ret = clk_prepare_enable(pcie->clk); + if (ret) + return ret; /* Get the dw-pcie unit configuration/control registers base. */ base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); diff --git a/drivers/pci/dwc/pcie-artpec6.c b/drivers/pci/dwc/pcie-artpec6.c index 01c6f7823672..6653619db6a1 100644 --- a/drivers/pci/dwc/pcie-artpec6.c +++ b/drivers/pci/dwc/pcie-artpec6.c @@ -141,12 +141,6 @@ static int artpec6_pcie_establish_link(struct artpec6_pcie *artpec6_pcie) artpec6_pcie_writel(artpec6_pcie, PCIECFG, val); usleep_range(100, 200); - /* - * Enable writing to config regs. 
This is required as the Synopsys - * driver changes the class code. That register needs DBI write enable. - */ - dw_pcie_writel_dbi(pci, MISC_CONTROL_1_OFF, DBI_RO_WR_EN); - /* setup root complex */ dw_pcie_setup_rc(pp); @@ -175,13 +169,15 @@ static void artpec6_pcie_enable_interrupts(struct artpec6_pcie *artpec6_pcie) dw_pcie_msi_init(pp); } -static void artpec6_pcie_host_init(struct pcie_port *pp) +static int artpec6_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci); artpec6_pcie_establish_link(artpec6_pcie); artpec6_pcie_enable_interrupts(artpec6_pcie); + + return 0; } static const struct dw_pcie_host_ops artpec6_pcie_host_ops = { @@ -207,9 +203,9 @@ static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie, if (IS_ENABLED(CONFIG_PCI_MSI)) { pp->msi_irq = platform_get_irq_byname(pdev, "msi"); - if (pp->msi_irq <= 0) { + if (pp->msi_irq < 0) { dev_err(dev, "failed to get MSI irq\n"); - return -ENODEV; + return pp->msi_irq; } ret = devm_request_irq(dev, pp->msi_irq, diff --git a/drivers/pci/dwc/pcie-designware-ep.c b/drivers/pci/dwc/pcie-designware-ep.c index 398406393f37..d53d5f168363 100644 --- a/drivers/pci/dwc/pcie-designware-ep.c +++ b/drivers/pci/dwc/pcie-designware-ep.c @@ -1,5 +1,5 @@ /** - * Synopsys Designware PCIe Endpoint controller driver + * Synopsys DesignWare PCIe Endpoint controller driver * * Copyright (C) 2017 Texas Instruments * Author: Kishon Vijay Abraham I <kishon@ti.com> @@ -283,7 +283,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) { int ret; void *addr; - enum pci_barno bar; struct pci_epc *epc; struct dw_pcie *pci = to_dw_pcie_from_ep(ep); struct device *dev = pci->dev; @@ -312,9 +311,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) return -ENOMEM; ep->outbound_addr = addr; - for (bar = BAR_0; bar <= BAR_5; bar++) - dw_pcie_ep_reset_bar(pci, bar); - if (ep->ops->ep_init) ep->ops->ep_init(ep); @@ -328,7 +324,8 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) if (ret < 0) epc->max_functions = 1; - ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size); + ret = __pci_epc_mem_init(epc, ep->phys_base, ep->addr_size, + ep->page_size); if (ret < 0) { dev_err(dev, "Failed to initialize address space\n"); return ret; diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c index d29c020da082..81e2157a7cfb 100644 --- a/drivers/pci/dwc/pcie-designware-host.c +++ b/drivers/pci/dwc/pcie-designware-host.c @@ -1,5 +1,5 @@ /* - * Synopsys Designware PCIe host controller driver + * Synopsys DesignWare PCIe host controller driver * * Copyright (C) 2013 Samsung Electronics Co., Ltd. 
* http://www.samsung.com @@ -71,9 +71,9 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) while ((pos = find_next_bit((unsigned long *) &val, 32, pos)) != 32) { irq = irq_find_mapping(pp->irq_domain, i * 32 + pos); + generic_handle_irq(irq); dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4, 1 << pos); - generic_handle_irq(irq); pos++; } } @@ -401,8 +401,11 @@ int dw_pcie_host_init(struct pcie_port *pp) } } - if (pp->ops->host_init) - pp->ops->host_init(pp); + if (pp->ops->host_init) { + ret = pp->ops->host_init(pp); + if (ret) + goto error; + } pp->root_bus_nr = pp->busn->start; @@ -594,10 +597,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp) dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000); /* setup interrupt pins */ + dw_pcie_dbi_ro_wr_en(pci); val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE); val &= 0xffff00ff; val |= 0x00000100; dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val); + dw_pcie_dbi_ro_wr_dis(pci); /* setup bus numbers */ val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS); @@ -634,8 +639,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp) dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); + /* Enable write permission for the DBI read-only register */ + dw_pcie_dbi_ro_wr_en(pci); /* program correct class for RC */ dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI); + /* Better disable write permission right after the update */ + dw_pcie_dbi_ro_wr_dis(pci); dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val); val |= PORT_LOGIC_SPEED_CHANGE; diff --git a/drivers/pci/dwc/pcie-designware-plat.c b/drivers/pci/dwc/pcie-designware-plat.c index 091b4e7ad059..168e2380f493 100644 --- a/drivers/pci/dwc/pcie-designware-plat.c +++ b/drivers/pci/dwc/pcie-designware-plat.c @@ -35,7 +35,7 @@ static irqreturn_t dw_plat_pcie_msi_irq_handler(int irq, void *arg) return dw_handle_msi_irq(pp); } -static void dw_plat_pcie_host_init(struct pcie_port *pp) +static int dw_plat_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); @@ -44,6 +44,8 @@ static void dw_plat_pcie_host_init(struct pcie_port *pp) if (IS_ENABLED(CONFIG_PCI_MSI)) dw_pcie_msi_init(pp); + + return 0; } static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = { diff --git a/drivers/pci/dwc/pcie-designware.c b/drivers/pci/dwc/pcie-designware.c index 0e03af279259..88abdddee2ad 100644 --- a/drivers/pci/dwc/pcie-designware.c +++ b/drivers/pci/dwc/pcie-designware.c @@ -1,5 +1,5 @@ /* - * Synopsys Designware PCIe host controller driver + * Synopsys DesignWare PCIe host controller driver * * Copyright (C) 2013 Samsung Electronics Co., Ltd. 
* http://www.samsung.com @@ -107,8 +107,9 @@ static void dw_pcie_writel_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg, dw_pcie_writel_dbi(pci, offset + reg, val); } -void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index, int type, - u64 cpu_addr, u64 pci_addr, u32 size) +static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index, + int type, u64 cpu_addr, + u64 pci_addr, u32 size) { u32 retries, val; @@ -177,7 +178,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, */ for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) { val = dw_pcie_readl_dbi(pci, PCIE_ATU_CR2); - if (val == PCIE_ATU_ENABLE) + if (val & PCIE_ATU_ENABLE) return; usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); @@ -200,8 +201,9 @@ static void dw_pcie_writel_ib_unroll(struct dw_pcie *pci, u32 index, u32 reg, dw_pcie_writel_dbi(pci, offset + reg, val); } -int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index, int bar, - u64 cpu_addr, enum dw_pcie_as_type as_type) +static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index, + int bar, u64 cpu_addr, + enum dw_pcie_as_type as_type) { int type; u32 retries, val; diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h index b4d2a89f8e58..e5d9d77b778e 100644 --- a/drivers/pci/dwc/pcie-designware.h +++ b/drivers/pci/dwc/pcie-designware.h @@ -1,5 +1,5 @@ /* - * Synopsys Designware PCIe host controller driver + * Synopsys DesignWare PCIe host controller driver * * Copyright (C) 2013 Samsung Electronics Co., Ltd. * http://www.samsung.com @@ -76,6 +76,9 @@ #define PCIE_ATU_FUNC(x) (((x) & 0x7) << 16) #define PCIE_ATU_UPPER_TARGET 0x91C +#define PCIE_MISC_CONTROL_1_OFF 0x8BC +#define PCIE_DBI_RO_WR_EN (0x1 << 0) + /* * iATU Unroll-specific register definitions * From 4.80 core version the address translation will be made by unroll @@ -134,7 +137,7 @@ struct dw_pcie_host_ops { unsigned int devfn, int where, int size, u32 *val); int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val); - void (*host_init)(struct pcie_port *pp); + int (*host_init)(struct pcie_port *pp); void (*msi_set_irq)(struct pcie_port *pp, int irq); void (*msi_clear_irq)(struct pcie_port *pp, int irq); phys_addr_t (*get_msi_addr)(struct pcie_port *pp); @@ -186,6 +189,7 @@ struct dw_pcie_ep { struct dw_pcie_ep_ops *ops; phys_addr_t phys_base; size_t addr_size; + size_t page_size; u8 bar_to_atu[6]; phys_addr_t *outbound_addr; unsigned long ib_window_map; @@ -279,6 +283,28 @@ static inline u32 dw_pcie_readl_dbi2(struct dw_pcie *pci, u32 reg) return __dw_pcie_read_dbi(pci, pci->dbi_base2, reg, 0x4); } +static inline void dw_pcie_dbi_ro_wr_en(struct dw_pcie *pci) +{ + u32 reg; + u32 val; + + reg = PCIE_MISC_CONTROL_1_OFF; + val = dw_pcie_readl_dbi(pci, reg); + val |= PCIE_DBI_RO_WR_EN; + dw_pcie_writel_dbi(pci, reg, val); +} + +static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci) +{ + u32 reg; + u32 val; + + reg = PCIE_MISC_CONTROL_1_OFF; + val = dw_pcie_readl_dbi(pci, reg); + val &= ~PCIE_DBI_RO_WR_EN; + dw_pcie_writel_dbi(pci, reg, val); +} + #ifdef CONFIG_PCIE_DW_HOST irqreturn_t dw_handle_msi_irq(struct pcie_port *pp); void dw_pcie_msi_init(struct pcie_port *pp); diff --git a/drivers/pci/dwc/pcie-hisi.c b/drivers/pci/dwc/pcie-hisi.c index e51acee0ddf3..a20179169e06 100644 --- a/drivers/pci/dwc/pcie-hisi.c +++ b/drivers/pci/dwc/pcie-hisi.c @@ -223,7 +223,7 @@ static int hisi_pcie_link_up(struct dw_pcie *pci) return 
hisi_pcie->soc_ops->hisi_pcie_link_up(hisi_pcie); } -static struct dw_pcie_host_ops hisi_pcie_host_ops = { +static const struct dw_pcie_host_ops hisi_pcie_host_ops = { .rd_own_conf = hisi_pcie_cfg_read, .wr_own_conf = hisi_pcie_cfg_write, }; @@ -268,7 +268,6 @@ static int hisi_pcie_probe(struct platform_device *pdev) struct dw_pcie *pci; struct hisi_pcie *hisi_pcie; struct resource *reg; - struct device_driver *driver; int ret; hisi_pcie = devm_kzalloc(dev, sizeof(*hisi_pcie), GFP_KERNEL); @@ -282,8 +281,6 @@ static int hisi_pcie_probe(struct platform_device *pdev) pci->dev = dev; pci->ops = &dw_pcie_ops; - driver = dev->driver; - hisi_pcie->pci = pci; hisi_pcie->soc_ops = of_device_get_match_data(dev); diff --git a/drivers/pci/dwc/pcie-kirin.c b/drivers/pci/dwc/pcie-kirin.c index 33fddb9f6739..dc3033cf3c19 100644 --- a/drivers/pci/dwc/pcie-kirin.c +++ b/drivers/pci/dwc/pcie-kirin.c @@ -430,9 +430,11 @@ static int kirin_pcie_establish_link(struct pcie_port *pp) return 0; } -static void kirin_pcie_host_init(struct pcie_port *pp) +static int kirin_pcie_host_init(struct pcie_port *pp) { kirin_pcie_establish_link(pp); + + return 0; } static struct dw_pcie_ops kirin_dw_pcie_ops = { @@ -441,7 +443,7 @@ static struct dw_pcie_ops kirin_dw_pcie_ops = { .link_up = kirin_pcie_link_up, }; -static struct dw_pcie_host_ops kirin_pcie_host_ops = { +static const struct dw_pcie_host_ops kirin_pcie_host_ops = { .rd_own_conf = kirin_pcie_rd_own_conf, .wr_own_conf = kirin_pcie_wr_own_conf, .host_init = kirin_pcie_host_init, diff --git a/drivers/pci/dwc/pcie-qcom.c b/drivers/pci/dwc/pcie-qcom.c index 68c5f2ab5bc8..ce7ba5b7552a 100644 --- a/drivers/pci/dwc/pcie-qcom.c +++ b/drivers/pci/dwc/pcie-qcom.c @@ -37,6 +37,20 @@ #include "pcie-designware.h" #define PCIE20_PARF_SYS_CTRL 0x00 +#define MST_WAKEUP_EN BIT(13) +#define SLV_WAKEUP_EN BIT(12) +#define MSTR_ACLK_CGC_DIS BIT(10) +#define SLV_ACLK_CGC_DIS BIT(9) +#define CORE_CLK_CGC_DIS BIT(6) +#define AUX_PWR_DET BIT(4) +#define L23_CLK_RMV_DIS BIT(2) +#define L1_CLK_RMV_DIS BIT(1) + +#define PCIE20_COMMAND_STATUS 0x04 +#define CMD_BME_VAL 0x4 +#define PCIE20_DEVICE_CONTROL2_STATUS2 0x98 +#define PCIE_CAP_CPL_TIMEOUT_DISABLE 0x10 + #define PCIE20_PARF_PHY_CTRL 0x40 #define PCIE20_PARF_PHY_REFCLK 0x4C #define PCIE20_PARF_DBI_BASE_ADDR 0x168 @@ -58,10 +72,22 @@ #define CFG_BRIDGE_SB_INIT BIT(0) #define PCIE20_CAP 0x70 +#define PCIE20_CAP_LINK_CAPABILITIES (PCIE20_CAP + 0xC) +#define PCIE20_CAP_ACTIVE_STATE_LINK_PM_SUPPORT (BIT(10) | BIT(11)) +#define PCIE20_CAP_LINK_1 (PCIE20_CAP + 0x14) +#define PCIE_CAP_LINK1_VAL 0x2FD7F + +#define PCIE20_PARF_Q2A_FLUSH 0x1AC + +#define PCIE20_MISC_CONTROL_1_REG 0x8BC +#define DBI_RO_WR_EN 1 #define PERST_DELAY_US 1000 -struct qcom_pcie_resources_v0 { +#define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE 0x358 +#define SLV_ADDR_SPACE_SZ 0x10000000 + +struct qcom_pcie_resources_2_1_0 { struct clk *iface_clk; struct clk *core_clk; struct clk *phy_clk; @@ -75,7 +101,7 @@ struct qcom_pcie_resources_v0 { struct regulator *vdda_refclk; }; -struct qcom_pcie_resources_v1 { +struct qcom_pcie_resources_1_0_0 { struct clk *iface; struct clk *aux; struct clk *master_bus; @@ -84,7 +110,7 @@ struct qcom_pcie_resources_v1 { struct regulator *vdda; }; -struct qcom_pcie_resources_v2 { +struct qcom_pcie_resources_2_3_2 { struct clk *aux_clk; struct clk *master_clk; struct clk *slave_clk; @@ -92,7 +118,7 @@ struct qcom_pcie_resources_v2 { struct clk *pipe_clk; }; -struct qcom_pcie_resources_v3 { +struct qcom_pcie_resources_2_4_0 { struct clk *aux_clk; struct 
clk *master_clk; struct clk *slave_clk; @@ -110,11 +136,21 @@ struct qcom_pcie_resources_v3 { struct reset_control *phy_ahb_reset; }; +struct qcom_pcie_resources_2_3_3 { + struct clk *iface; + struct clk *axi_m_clk; + struct clk *axi_s_clk; + struct clk *ahb_clk; + struct clk *aux_clk; + struct reset_control *rst[7]; +}; + union qcom_pcie_resources { - struct qcom_pcie_resources_v0 v0; - struct qcom_pcie_resources_v1 v1; - struct qcom_pcie_resources_v2 v2; - struct qcom_pcie_resources_v3 v3; + struct qcom_pcie_resources_1_0_0 v1_0_0; + struct qcom_pcie_resources_2_1_0 v2_1_0; + struct qcom_pcie_resources_2_3_2 v2_3_2; + struct qcom_pcie_resources_2_3_3 v2_3_3; + struct qcom_pcie_resources_2_4_0 v2_4_0; }; struct qcom_pcie; @@ -124,6 +160,7 @@ struct qcom_pcie_ops { int (*init)(struct qcom_pcie *pcie); int (*post_init)(struct qcom_pcie *pcie); void (*deinit)(struct qcom_pcie *pcie); + void (*post_deinit)(struct qcom_pcie *pcie); void (*ltssm_enable)(struct qcom_pcie *pcie); }; @@ -141,13 +178,13 @@ struct qcom_pcie { static void qcom_ep_reset_assert(struct qcom_pcie *pcie) { - gpiod_set_value(pcie->reset, 1); + gpiod_set_value_cansleep(pcie->reset, 1); usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); } static void qcom_ep_reset_deassert(struct qcom_pcie *pcie) { - gpiod_set_value(pcie->reset, 0); + gpiod_set_value_cansleep(pcie->reset, 0); usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); } @@ -172,7 +209,7 @@ static int qcom_pcie_establish_link(struct qcom_pcie *pcie) return dw_pcie_wait_for_link(pci); } -static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie) +static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie) { u32 val; @@ -182,9 +219,9 @@ static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie) writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL); } -static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie) +static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v0 *res = &pcie->res.v0; + struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; @@ -212,29 +249,29 @@ static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie) if (IS_ERR(res->phy_clk)) return PTR_ERR(res->phy_clk); - res->pci_reset = devm_reset_control_get(dev, "pci"); + res->pci_reset = devm_reset_control_get_exclusive(dev, "pci"); if (IS_ERR(res->pci_reset)) return PTR_ERR(res->pci_reset); - res->axi_reset = devm_reset_control_get(dev, "axi"); + res->axi_reset = devm_reset_control_get_exclusive(dev, "axi"); if (IS_ERR(res->axi_reset)) return PTR_ERR(res->axi_reset); - res->ahb_reset = devm_reset_control_get(dev, "ahb"); + res->ahb_reset = devm_reset_control_get_exclusive(dev, "ahb"); if (IS_ERR(res->ahb_reset)) return PTR_ERR(res->ahb_reset); - res->por_reset = devm_reset_control_get(dev, "por"); + res->por_reset = devm_reset_control_get_exclusive(dev, "por"); if (IS_ERR(res->por_reset)) return PTR_ERR(res->por_reset); - res->phy_reset = devm_reset_control_get(dev, "phy"); + res->phy_reset = devm_reset_control_get_exclusive(dev, "phy"); return PTR_ERR_OR_ZERO(res->phy_reset); } -static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie) +static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v0 *res = &pcie->res.v0; + struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; reset_control_assert(res->pci_reset); reset_control_assert(res->axi_reset); @@ -249,9 +286,9 @@ static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie) 
regulator_disable(res->vdda_refclk); } -static int qcom_pcie_init_v0(struct qcom_pcie *pcie) +static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v0 *res = &pcie->res.v0; + struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; u32 val; @@ -367,9 +404,9 @@ err_refclk: return ret; } -static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie) +static int qcom_pcie_get_resources_1_0_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v1 *res = &pcie->res.v1; + struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; @@ -393,13 +430,13 @@ static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie) if (IS_ERR(res->slave_bus)) return PTR_ERR(res->slave_bus); - res->core = devm_reset_control_get(dev, "core"); + res->core = devm_reset_control_get_exclusive(dev, "core"); return PTR_ERR_OR_ZERO(res->core); } -static void qcom_pcie_deinit_v1(struct qcom_pcie *pcie) +static void qcom_pcie_deinit_1_0_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v1 *res = &pcie->res.v1; + struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0; reset_control_assert(res->core); clk_disable_unprepare(res->slave_bus); @@ -409,9 +446,9 @@ static void qcom_pcie_deinit_v1(struct qcom_pcie *pcie) regulator_disable(res->vdda); } -static int qcom_pcie_init_v1(struct qcom_pcie *pcie) +static int qcom_pcie_init_1_0_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v1 *res = &pcie->res.v1; + struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; int ret; @@ -477,7 +514,7 @@ err_res: return ret; } -static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie) +static void qcom_pcie_2_3_2_ltssm_enable(struct qcom_pcie *pcie) { u32 val; @@ -487,9 +524,9 @@ static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie) writel(val, pcie->parf + PCIE20_PARF_LTSSM); } -static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie) +static int qcom_pcie_get_resources_2_3_2(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v2 *res = &pcie->res.v2; + struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; @@ -513,20 +550,26 @@ static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie) return PTR_ERR_OR_ZERO(res->pipe_clk); } -static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie) +static void qcom_pcie_deinit_2_3_2(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v2 *res = &pcie->res.v2; + struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; - clk_disable_unprepare(res->pipe_clk); clk_disable_unprepare(res->slave_clk); clk_disable_unprepare(res->master_clk); clk_disable_unprepare(res->cfg_clk); clk_disable_unprepare(res->aux_clk); } -static int qcom_pcie_init_v2(struct qcom_pcie *pcie) +static void qcom_pcie_post_deinit_2_3_2(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v2 *res = &pcie->res.v2; + struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; + + clk_disable_unprepare(res->pipe_clk); +} + +static int qcom_pcie_init_2_3_2(struct qcom_pcie *pcie) +{ + struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; u32 val; @@ -589,9 +632,9 @@ err_cfg_clk: return ret; } -static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie) +static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie) { - struct 
qcom_pcie_resources_v2 *res = &pcie->res.v2; + struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; int ret; @@ -605,9 +648,9 @@ static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie) return 0; } -static int qcom_pcie_get_resources_v3(struct qcom_pcie *pcie) +static int qcom_pcie_get_resources_2_4_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v3 *res = &pcie->res.v3; + struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; @@ -623,60 +666,64 @@ static int qcom_pcie_get_resources_v3(struct qcom_pcie *pcie) if (IS_ERR(res->slave_clk)) return PTR_ERR(res->slave_clk); - res->axi_m_reset = devm_reset_control_get(dev, "axi_m"); + res->axi_m_reset = devm_reset_control_get_exclusive(dev, "axi_m"); if (IS_ERR(res->axi_m_reset)) return PTR_ERR(res->axi_m_reset); - res->axi_s_reset = devm_reset_control_get(dev, "axi_s"); + res->axi_s_reset = devm_reset_control_get_exclusive(dev, "axi_s"); if (IS_ERR(res->axi_s_reset)) return PTR_ERR(res->axi_s_reset); - res->pipe_reset = devm_reset_control_get(dev, "pipe"); + res->pipe_reset = devm_reset_control_get_exclusive(dev, "pipe"); if (IS_ERR(res->pipe_reset)) return PTR_ERR(res->pipe_reset); - res->axi_m_vmid_reset = devm_reset_control_get(dev, "axi_m_vmid"); + res->axi_m_vmid_reset = devm_reset_control_get_exclusive(dev, + "axi_m_vmid"); if (IS_ERR(res->axi_m_vmid_reset)) return PTR_ERR(res->axi_m_vmid_reset); - res->axi_s_xpu_reset = devm_reset_control_get(dev, "axi_s_xpu"); + res->axi_s_xpu_reset = devm_reset_control_get_exclusive(dev, + "axi_s_xpu"); if (IS_ERR(res->axi_s_xpu_reset)) return PTR_ERR(res->axi_s_xpu_reset); - res->parf_reset = devm_reset_control_get(dev, "parf"); + res->parf_reset = devm_reset_control_get_exclusive(dev, "parf"); if (IS_ERR(res->parf_reset)) return PTR_ERR(res->parf_reset); - res->phy_reset = devm_reset_control_get(dev, "phy"); + res->phy_reset = devm_reset_control_get_exclusive(dev, "phy"); if (IS_ERR(res->phy_reset)) return PTR_ERR(res->phy_reset); - res->axi_m_sticky_reset = devm_reset_control_get(dev, "axi_m_sticky"); + res->axi_m_sticky_reset = devm_reset_control_get_exclusive(dev, + "axi_m_sticky"); if (IS_ERR(res->axi_m_sticky_reset)) return PTR_ERR(res->axi_m_sticky_reset); - res->pipe_sticky_reset = devm_reset_control_get(dev, "pipe_sticky"); + res->pipe_sticky_reset = devm_reset_control_get_exclusive(dev, + "pipe_sticky"); if (IS_ERR(res->pipe_sticky_reset)) return PTR_ERR(res->pipe_sticky_reset); - res->pwr_reset = devm_reset_control_get(dev, "pwr"); + res->pwr_reset = devm_reset_control_get_exclusive(dev, "pwr"); if (IS_ERR(res->pwr_reset)) return PTR_ERR(res->pwr_reset); - res->ahb_reset = devm_reset_control_get(dev, "ahb"); + res->ahb_reset = devm_reset_control_get_exclusive(dev, "ahb"); if (IS_ERR(res->ahb_reset)) return PTR_ERR(res->ahb_reset); - res->phy_ahb_reset = devm_reset_control_get(dev, "phy_ahb"); + res->phy_ahb_reset = devm_reset_control_get_exclusive(dev, "phy_ahb"); if (IS_ERR(res->phy_ahb_reset)) return PTR_ERR(res->phy_ahb_reset); return 0; } -static void qcom_pcie_deinit_v3(struct qcom_pcie *pcie) +static void qcom_pcie_deinit_2_4_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v3 *res = &pcie->res.v3; + struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0; reset_control_assert(res->axi_m_reset); reset_control_assert(res->axi_s_reset); @@ -692,9 +739,9 @@ static void qcom_pcie_deinit_v3(struct qcom_pcie *pcie) 
clk_disable_unprepare(res->slave_clk); } -static int qcom_pcie_init_v3(struct qcom_pcie *pcie) +static int qcom_pcie_init_2_4_0(struct qcom_pcie *pcie) { - struct qcom_pcie_resources_v3 *res = &pcie->res.v3; + struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; u32 val; @@ -884,6 +931,166 @@ err_rst_phy: return ret; } +static int qcom_pcie_get_resources_2_3_3(struct qcom_pcie *pcie) +{ + struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3; + struct dw_pcie *pci = pcie->pci; + struct device *dev = pci->dev; + int i; + const char *rst_names[] = { "axi_m", "axi_s", "pipe", + "axi_m_sticky", "sticky", + "ahb", "sleep", }; + + res->iface = devm_clk_get(dev, "iface"); + if (IS_ERR(res->iface)) + return PTR_ERR(res->iface); + + res->axi_m_clk = devm_clk_get(dev, "axi_m"); + if (IS_ERR(res->axi_m_clk)) + return PTR_ERR(res->axi_m_clk); + + res->axi_s_clk = devm_clk_get(dev, "axi_s"); + if (IS_ERR(res->axi_s_clk)) + return PTR_ERR(res->axi_s_clk); + + res->ahb_clk = devm_clk_get(dev, "ahb"); + if (IS_ERR(res->ahb_clk)) + return PTR_ERR(res->ahb_clk); + + res->aux_clk = devm_clk_get(dev, "aux"); + if (IS_ERR(res->aux_clk)) + return PTR_ERR(res->aux_clk); + + for (i = 0; i < ARRAY_SIZE(rst_names); i++) { + res->rst[i] = devm_reset_control_get(dev, rst_names[i]); + if (IS_ERR(res->rst[i])) + return PTR_ERR(res->rst[i]); + } + + return 0; +} + +static void qcom_pcie_deinit_2_3_3(struct qcom_pcie *pcie) +{ + struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3; + + clk_disable_unprepare(res->iface); + clk_disable_unprepare(res->axi_m_clk); + clk_disable_unprepare(res->axi_s_clk); + clk_disable_unprepare(res->ahb_clk); + clk_disable_unprepare(res->aux_clk); +} + +static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie) +{ + struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3; + struct dw_pcie *pci = pcie->pci; + struct device *dev = pci->dev; + int i, ret; + u32 val; + + for (i = 0; i < ARRAY_SIZE(res->rst); i++) { + ret = reset_control_assert(res->rst[i]); + if (ret) { + dev_err(dev, "reset #%d assert failed (%d)\n", i, ret); + return ret; + } + } + + usleep_range(2000, 2500); + + for (i = 0; i < ARRAY_SIZE(res->rst); i++) { + ret = reset_control_deassert(res->rst[i]); + if (ret) { + dev_err(dev, "reset #%d deassert failed (%d)\n", i, + ret); + return ret; + } + } + + /* + * Don't have a way to see if the reset has completed. + * Wait for some time. 
+ */ + usleep_range(2000, 2500); + + ret = clk_prepare_enable(res->iface); + if (ret) { + dev_err(dev, "cannot prepare/enable core clock\n"); + goto err_clk_iface; + } + + ret = clk_prepare_enable(res->axi_m_clk); + if (ret) { + dev_err(dev, "cannot prepare/enable core clock\n"); + goto err_clk_axi_m; + } + + ret = clk_prepare_enable(res->axi_s_clk); + if (ret) { + dev_err(dev, "cannot prepare/enable axi slave clock\n"); + goto err_clk_axi_s; + } + + ret = clk_prepare_enable(res->ahb_clk); + if (ret) { + dev_err(dev, "cannot prepare/enable ahb clock\n"); + goto err_clk_ahb; + } + + ret = clk_prepare_enable(res->aux_clk); + if (ret) { + dev_err(dev, "cannot prepare/enable aux clock\n"); + goto err_clk_aux; + } + + writel(SLV_ADDR_SPACE_SZ, + pcie->parf + PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE); + + val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); + val &= ~BIT(0); + writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); + + writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); + + writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS + | SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS | + AUX_PWR_DET | L23_CLK_RMV_DIS | L1_CLK_RMV_DIS, + pcie->parf + PCIE20_PARF_SYS_CTRL); + writel(0, pcie->parf + PCIE20_PARF_Q2A_FLUSH); + + writel(CMD_BME_VAL, pci->dbi_base + PCIE20_COMMAND_STATUS); + writel(DBI_RO_WR_EN, pci->dbi_base + PCIE20_MISC_CONTROL_1_REG); + writel(PCIE_CAP_LINK1_VAL, pci->dbi_base + PCIE20_CAP_LINK_1); + + val = readl(pci->dbi_base + PCIE20_CAP_LINK_CAPABILITIES); + val &= ~PCIE20_CAP_ACTIVE_STATE_LINK_PM_SUPPORT; + writel(val, pci->dbi_base + PCIE20_CAP_LINK_CAPABILITIES); + + writel(PCIE_CAP_CPL_TIMEOUT_DISABLE, pci->dbi_base + + PCIE20_DEVICE_CONTROL2_STATUS2); + + return 0; + +err_clk_aux: + clk_disable_unprepare(res->ahb_clk); +err_clk_ahb: + clk_disable_unprepare(res->axi_s_clk); +err_clk_axi_s: + clk_disable_unprepare(res->axi_m_clk); +err_clk_axi_m: + clk_disable_unprepare(res->iface); +err_clk_iface: + /* + * Not checking for failure, will anyway return + * the original failure in 'ret'. 
+ */ + for (i = 0; i < ARRAY_SIZE(res->rst); i++) + reset_control_assert(res->rst[i]); + + return ret; +} + static int qcom_pcie_link_up(struct dw_pcie *pci) { u16 val = readw(pci->dbi_base + PCIE20_CAP + PCI_EXP_LNKSTA); @@ -891,7 +1098,7 @@ static int qcom_pcie_link_up(struct dw_pcie *pci) return !!(val & PCI_EXP_LNKSTA_DLLLA); } -static void qcom_pcie_host_init(struct pcie_port *pp) +static int qcom_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct qcom_pcie *pcie = to_qcom_pcie(pci); @@ -901,14 +1108,17 @@ static void qcom_pcie_host_init(struct pcie_port *pp) ret = pcie->ops->init(pcie); if (ret) - goto err_deinit; + return ret; ret = phy_power_on(pcie->phy); if (ret) goto err_deinit; - if (pcie->ops->post_init) - pcie->ops->post_init(pcie); + if (pcie->ops->post_init) { + ret = pcie->ops->post_init(pcie); + if (ret) + goto err_disable_phy; + } dw_pcie_setup_rc(pp); @@ -921,12 +1131,17 @@ static void qcom_pcie_host_init(struct pcie_port *pp) if (ret) goto err; - return; + return 0; err: qcom_ep_reset_assert(pcie); + if (pcie->ops->post_deinit) + pcie->ops->post_deinit(pcie); +err_disable_phy: phy_power_off(pcie->phy); err_deinit: pcie->ops->deinit(pcie); + + return ret; } static int qcom_pcie_rd_own_conf(struct pcie_port *pp, int where, int size, @@ -950,37 +1165,50 @@ static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { .rd_own_conf = qcom_pcie_rd_own_conf, }; -static const struct qcom_pcie_ops ops_v0 = { - .get_resources = qcom_pcie_get_resources_v0, - .init = qcom_pcie_init_v0, - .deinit = qcom_pcie_deinit_v0, - .ltssm_enable = qcom_pcie_v0_v1_ltssm_enable, +/* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ +static const struct qcom_pcie_ops ops_2_1_0 = { + .get_resources = qcom_pcie_get_resources_2_1_0, + .init = qcom_pcie_init_2_1_0, + .deinit = qcom_pcie_deinit_2_1_0, + .ltssm_enable = qcom_pcie_2_1_0_ltssm_enable, }; -static const struct qcom_pcie_ops ops_v1 = { - .get_resources = qcom_pcie_get_resources_v1, - .init = qcom_pcie_init_v1, - .deinit = qcom_pcie_deinit_v1, - .ltssm_enable = qcom_pcie_v0_v1_ltssm_enable, +/* Qcom IP rev.: 1.0.0 Synopsys IP rev.: 4.11a */ +static const struct qcom_pcie_ops ops_1_0_0 = { + .get_resources = qcom_pcie_get_resources_1_0_0, + .init = qcom_pcie_init_1_0_0, + .deinit = qcom_pcie_deinit_1_0_0, + .ltssm_enable = qcom_pcie_2_1_0_ltssm_enable, }; -static const struct qcom_pcie_ops ops_v2 = { - .get_resources = qcom_pcie_get_resources_v2, - .init = qcom_pcie_init_v2, - .post_init = qcom_pcie_post_init_v2, - .deinit = qcom_pcie_deinit_v2, - .ltssm_enable = qcom_pcie_v2_ltssm_enable, +/* Qcom IP rev.: 2.3.2 Synopsys IP rev.: 4.21a */ +static const struct qcom_pcie_ops ops_2_3_2 = { + .get_resources = qcom_pcie_get_resources_2_3_2, + .init = qcom_pcie_init_2_3_2, + .post_init = qcom_pcie_post_init_2_3_2, + .deinit = qcom_pcie_deinit_2_3_2, + .post_deinit = qcom_pcie_post_deinit_2_3_2, + .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, }; -static const struct dw_pcie_ops dw_pcie_ops = { - .link_up = qcom_pcie_link_up, +/* Qcom IP rev.: 2.4.0 Synopsys IP rev.: 4.20a */ +static const struct qcom_pcie_ops ops_2_4_0 = { + .get_resources = qcom_pcie_get_resources_2_4_0, + .init = qcom_pcie_init_2_4_0, + .deinit = qcom_pcie_deinit_2_4_0, + .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, }; -static const struct qcom_pcie_ops ops_v3 = { - .get_resources = qcom_pcie_get_resources_v3, - .init = qcom_pcie_init_v3, - .deinit = qcom_pcie_deinit_v3, - .ltssm_enable = qcom_pcie_v2_ltssm_enable, +/* Qcom IP rev.: 2.3.3 
Synopsys IP rev.: 4.30a */ +static const struct qcom_pcie_ops ops_2_3_3 = { + .get_resources = qcom_pcie_get_resources_2_3_3, + .init = qcom_pcie_init_2_3_3, + .deinit = qcom_pcie_deinit_2_3_3, + .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, +}; + +static const struct dw_pcie_ops dw_pcie_ops = { + .link_up = qcom_pcie_link_up, }; static int qcom_pcie_probe(struct platform_device *pdev) @@ -1069,11 +1297,12 @@ static int qcom_pcie_probe(struct platform_device *pdev) } static const struct of_device_id qcom_pcie_match[] = { - { .compatible = "qcom,pcie-ipq8064", .data = &ops_v0 }, - { .compatible = "qcom,pcie-apq8064", .data = &ops_v0 }, - { .compatible = "qcom,pcie-apq8084", .data = &ops_v1 }, - { .compatible = "qcom,pcie-msm8996", .data = &ops_v2 }, - { .compatible = "qcom,pcie-ipq4019", .data = &ops_v3 }, + { .compatible = "qcom,pcie-apq8084", .data = &ops_1_0_0 }, + { .compatible = "qcom,pcie-ipq8064", .data = &ops_2_1_0 }, + { .compatible = "qcom,pcie-apq8064", .data = &ops_2_1_0 }, + { .compatible = "qcom,pcie-msm8996", .data = &ops_2_3_2 }, + { .compatible = "qcom,pcie-ipq8074", .data = &ops_2_3_3 }, + { .compatible = "qcom,pcie-ipq4019", .data = &ops_2_4_0 }, { } }; diff --git a/drivers/pci/dwc/pcie-spear13xx.c b/drivers/pci/dwc/pcie-spear13xx.c index 80897291e0fb..709189d23b31 100644 --- a/drivers/pci/dwc/pcie-spear13xx.c +++ b/drivers/pci/dwc/pcie-spear13xx.c @@ -177,13 +177,15 @@ static int spear13xx_pcie_link_up(struct dw_pcie *pci) return 0; } -static void spear13xx_pcie_host_init(struct pcie_port *pp) +static int spear13xx_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci); spear13xx_pcie_establish_link(spear13xx_pcie); spear13xx_pcie_enable_interrupts(spear13xx_pcie); + + return 0; } static const struct dw_pcie_host_ops spear13xx_pcie_host_ops = { @@ -199,9 +201,9 @@ static int spear13xx_add_pcie_port(struct spear13xx_pcie *spear13xx_pcie, int ret; pp->irq = platform_get_irq(pdev, 0); - if (!pp->irq) { + if (pp->irq < 0) { dev_err(dev, "failed to get irq\n"); - return -ENODEV; + return pp->irq; } ret = devm_request_irq(dev, pp->irq, spear13xx_pcie_irq_handler, IRQF_SHARED | IRQF_NO_THREAD, diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c index 53fff8030337..4ddc6e8f9fe7 100644 --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -54,6 +54,8 @@ static struct workqueue_struct *kpcitest_workqueue; struct pci_epf_test { void *reg[6]; struct pci_epf *epf; + enum pci_barno test_reg_bar; + bool linkup_notifier; struct delayed_work cmd_handler; }; @@ -74,7 +76,12 @@ static struct pci_epf_header test_header = { .interrupt_pin = PCI_INTERRUPT_INTA, }; -static int bar_size[] = { 512, 1024, 16384, 131072, 1048576 }; +struct pci_epf_test_data { + enum pci_barno test_reg_bar; + bool linkup_notifier; +}; + +static int bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; static int pci_epf_test_copy(struct pci_epf_test *epf_test) { @@ -86,7 +93,8 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test) struct pci_epf *epf = epf_test->epf; struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; - struct pci_epf_test_reg *reg = epf_test->reg[0]; + enum pci_barno test_reg_bar = epf_test->test_reg_bar; + struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size); if (!src_addr) { @@ -145,7 +153,8 @@ static 
int pci_epf_test_read(struct pci_epf_test *epf_test) struct pci_epf *epf = epf_test->epf; struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; - struct pci_epf_test_reg *reg = epf_test->reg[0]; + enum pci_barno test_reg_bar = epf_test->test_reg_bar; + struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size); if (!src_addr) { @@ -195,7 +204,8 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test) struct pci_epf *epf = epf_test->epf; struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; - struct pci_epf_test_reg *reg = epf_test->reg[0]; + enum pci_barno test_reg_bar = epf_test->test_reg_bar; + struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size); if (!dst_addr) { @@ -247,7 +257,8 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test) u8 msi_count; struct pci_epf *epf = epf_test->epf; struct pci_epc *epc = epf->epc; - struct pci_epf_test_reg *reg = epf_test->reg[0]; + enum pci_barno test_reg_bar = epf_test->test_reg_bar; + struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; reg->status |= STATUS_IRQ_RAISED; msi_count = pci_epc_get_msi(epc); @@ -263,22 +274,28 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) int ret; u8 irq; u8 msi_count; + u32 command; struct pci_epf_test *epf_test = container_of(work, struct pci_epf_test, cmd_handler.work); struct pci_epf *epf = epf_test->epf; struct pci_epc *epc = epf->epc; - struct pci_epf_test_reg *reg = epf_test->reg[0]; + enum pci_barno test_reg_bar = epf_test->test_reg_bar; + struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; - if (!reg->command) + command = reg->command; + if (!command) goto reset_handler; - if (reg->command & COMMAND_RAISE_LEGACY_IRQ) { + reg->command = 0; + reg->status = 0; + + if (command & COMMAND_RAISE_LEGACY_IRQ) { reg->status = STATUS_IRQ_RAISED; pci_epc_raise_irq(epc, PCI_EPC_IRQ_LEGACY, 0); goto reset_handler; } - if (reg->command & COMMAND_WRITE) { + if (command & COMMAND_WRITE) { ret = pci_epf_test_write(epf_test); if (ret) reg->status |= STATUS_WRITE_FAIL; @@ -288,7 +305,7 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) goto reset_handler; } - if (reg->command & COMMAND_READ) { + if (command & COMMAND_READ) { ret = pci_epf_test_read(epf_test); if (!ret) reg->status |= STATUS_READ_SUCCESS; @@ -298,7 +315,7 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) goto reset_handler; } - if (reg->command & COMMAND_COPY) { + if (command & COMMAND_COPY) { ret = pci_epf_test_copy(epf_test); if (!ret) reg->status |= STATUS_COPY_SUCCESS; @@ -308,9 +325,9 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) goto reset_handler; } - if (reg->command & COMMAND_RAISE_MSI_IRQ) { + if (command & COMMAND_RAISE_MSI_IRQ) { msi_count = pci_epc_get_msi(epc); - irq = (reg->command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT; + irq = (command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT; if (irq > msi_count || msi_count <= 0) goto reset_handler; reg->status = STATUS_IRQ_RAISED; @@ -319,8 +336,6 @@ static void pci_epf_test_cmd_handler(struct work_struct *work) } reset_handler: - reg->command = 0; - queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler, msecs_to_jiffies(1)); } @@ -358,6 +373,7 @@ static int pci_epf_test_set_bar(struct pci_epf *epf) struct pci_epc *epc = epf->epc; struct device *dev = &epf->dev; struct pci_epf_test *epf_test = epf_get_drvdata(epf); + enum pci_barno test_reg_bar = 
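The pci_epf_test_cmd_handler() change above latches reg->command into a local and clears the command and status registers before acting, so a command the host writes while the previous one is still being processed is not silently discarded at the end of the handler. The shape of that pattern, with a hypothetical register layout, bit assignments and do_work() helper:

struct doorbell_reg {
	u32 command;
	u32 status;
};

static void handle_doorbell(struct doorbell_reg *reg)
{
	u32 command = reg->command;	/* snapshot once */

	if (!command)
		return;

	reg->command = 0;		/* clear before acting */
	reg->status = 0;

	if (command & BIT(0)) {		/* e.g. COMMAND_WRITE */
		if (do_work())
			reg->status |= BIT(1);	/* e.g. STATUS_WRITE_FAIL */
		else
			reg->status |= BIT(2);	/* e.g. STATUS_WRITE_SUCCESS */
	}
}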
epf_test->test_reg_bar; flags = PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_32; if (sizeof(dma_addr_t) == 0x8) @@ -370,7 +386,7 @@ static int pci_epf_test_set_bar(struct pci_epf *epf) if (ret) { pci_epf_free_space(epf, epf_test->reg[bar], bar); dev_err(dev, "failed to set BAR%d\n", bar); - if (bar == BAR_0) + if (bar == test_reg_bar) return ret; } } @@ -384,17 +400,20 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf) struct device *dev = &epf->dev; void *base; int bar; + enum pci_barno test_reg_bar = epf_test->test_reg_bar; base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg), - BAR_0); + test_reg_bar); if (!base) { dev_err(dev, "failed to allocated register space\n"); return -ENOMEM; } - epf_test->reg[0] = base; + epf_test->reg[test_reg_bar] = base; - for (bar = BAR_1; bar <= BAR_5; bar++) { - base = pci_epf_alloc_space(epf, bar_size[bar - 1], bar); + for (bar = BAR_0; bar <= BAR_5; bar++) { + if (bar == test_reg_bar) + continue; + base = pci_epf_alloc_space(epf, bar_size[bar], bar); if (!base) dev_err(dev, "failed to allocate space for BAR%d\n", bar); @@ -407,6 +426,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf) static int pci_epf_test_bind(struct pci_epf *epf) { int ret; + struct pci_epf_test *epf_test = epf_get_drvdata(epf); struct pci_epf_header *header = epf->header; struct pci_epc *epc = epf->epc; struct device *dev = &epf->dev; @@ -432,13 +452,34 @@ static int pci_epf_test_bind(struct pci_epf *epf) if (ret) return ret; + if (!epf_test->linkup_notifier) + queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work); + return 0; } +static const struct pci_epf_device_id pci_epf_test_ids[] = { + { + .name = "pci_epf_test", + }, + {}, +}; + static int pci_epf_test_probe(struct pci_epf *epf) { struct pci_epf_test *epf_test; struct device *dev = &epf->dev; + const struct pci_epf_device_id *match; + struct pci_epf_test_data *data; + enum pci_barno test_reg_bar = BAR_0; + bool linkup_notifier = true; + + match = pci_epf_match_device(pci_epf_test_ids, epf); + data = (struct pci_epf_test_data *)match->driver_data; + if (data) { + test_reg_bar = data->test_reg_bar; + linkup_notifier = data->linkup_notifier; + } epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL); if (!epf_test) @@ -446,6 +487,8 @@ static int pci_epf_test_probe(struct pci_epf *epf) epf->header = &test_header; epf_test->epf = epf; + epf_test->test_reg_bar = test_reg_bar; + epf_test->linkup_notifier = linkup_notifier; INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler); @@ -453,31 +496,15 @@ static int pci_epf_test_probe(struct pci_epf *epf) return 0; } -static int pci_epf_test_remove(struct pci_epf *epf) -{ - struct pci_epf_test *epf_test = epf_get_drvdata(epf); - - kfree(epf_test); - return 0; -} - static struct pci_epf_ops ops = { .unbind = pci_epf_test_unbind, .bind = pci_epf_test_bind, .linkup = pci_epf_test_linkup, }; -static const struct pci_epf_device_id pci_epf_test_ids[] = { - { - .name = "pci_epf_test", - }, - {}, -}; - static struct pci_epf_driver test_driver = { .driver.name = "pci_epf_test", .probe = pci_epf_test_probe, - .remove = pci_epf_test_remove, .id_table = pci_epf_test_ids, .ops = &ops, .owner = THIS_MODULE, diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c index caa7be10e473..42c2a1156325 100644 --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c @@ -21,6 +21,7 @@ #include <linux/dma-mapping.h> #include <linux/slab.h> #include <linux/module.h> +#include 
<linux/of_device.h> #include <linux/pci-epc.h> #include <linux/pci-epf.h> @@ -370,6 +371,7 @@ EXPORT_SYMBOL_GPL(pci_epc_write_header); int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf) { unsigned long flags; + struct device *dev = epc->dev.parent; if (epf->epc) return -EBUSY; @@ -381,8 +383,12 @@ int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf) return -EINVAL; epf->epc = epc; - dma_set_coherent_mask(&epf->dev, epc->dev.coherent_dma_mask); - epf->dev.dma_mask = epc->dev.dma_mask; + if (dev->of_node) { + of_dma_configure(&epf->dev, dev->of_node); + } else { + dma_set_coherent_mask(&epf->dev, epc->dev.coherent_dma_mask); + epf->dev.dma_mask = epc->dev.dma_mask; + } spin_lock_irqsave(&epc->lock, flags); list_add_tail(&epf->list, &epc->pci_epf); @@ -500,6 +506,7 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops, dma_set_coherent_mask(&epc->dev, dev->coherent_dma_mask); epc->dev.class = pci_epc_class; epc->dev.dma_mask = dev->dma_mask; + epc->dev.parent = dev; epc->ops = ops; ret = dev_set_name(&epc->dev, "%s", dev_name(dev)); diff --git a/drivers/pci/endpoint/pci-epc-mem.c b/drivers/pci/endpoint/pci-epc-mem.c index 3a94cc1caf22..83b7d5d3fc3e 100644 --- a/drivers/pci/endpoint/pci-epc-mem.c +++ b/drivers/pci/endpoint/pci-epc-mem.c @@ -24,21 +24,54 @@ #include <linux/pci-epc.h> /** - * pci_epc_mem_init() - initialize the pci_epc_mem structure + * pci_epc_mem_get_order() - determine the allocation order of a memory size + * @mem: address space of the endpoint controller + * @size: the size for which to get the order + * + * Reimplement get_order() for mem->page_size since the generic get_order + * always gets order with a constant PAGE_SIZE. + */ +static int pci_epc_mem_get_order(struct pci_epc_mem *mem, size_t size) +{ + int order; + unsigned int page_shift = ilog2(mem->page_size); + + size--; + size >>= page_shift; +#if BITS_PER_LONG == 32 + order = fls(size); +#else + order = fls64(size); +#endif + return order; +} + +/** + * __pci_epc_mem_init() - initialize the pci_epc_mem structure * @epc: the EPC device that invoked pci_epc_mem_init * @phys_base: the physical address of the base * @size: the size of the address space + * @page_size: size of each page * * Invoke to initialize the pci_epc_mem structure used by the * endpoint functions to allocate mapped PCI address. 
*/ -int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size) +int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size, + size_t page_size) { int ret; struct pci_epc_mem *mem; unsigned long *bitmap; - int pages = size >> PAGE_SHIFT; - int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long); + unsigned int page_shift; + int pages; + int bitmap_size; + + if (page_size < PAGE_SIZE) + page_size = PAGE_SIZE; + + page_shift = ilog2(page_size); + pages = size >> page_shift; + bitmap_size = BITS_TO_LONGS(pages) * sizeof(long); mem = kzalloc(sizeof(*mem), GFP_KERNEL); if (!mem) { @@ -54,6 +87,7 @@ int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size) mem->bitmap = bitmap; mem->phys_base = phys_base; + mem->page_size = page_size; mem->pages = pages; mem->size = size; @@ -67,7 +101,7 @@ err_mem: err: return ret; } -EXPORT_SYMBOL_GPL(pci_epc_mem_init); +EXPORT_SYMBOL_GPL(__pci_epc_mem_init); /** * pci_epc_mem_exit() - cleanup the pci_epc_mem structure @@ -101,13 +135,17 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc, int pageno; void __iomem *virt_addr; struct pci_epc_mem *mem = epc->mem; - int order = get_order(size); + unsigned int page_shift = ilog2(mem->page_size); + int order; + + size = ALIGN(size, mem->page_size); + order = pci_epc_mem_get_order(mem, size); pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order); if (pageno < 0) return NULL; - *phys_addr = mem->phys_base + (pageno << PAGE_SHIFT); + *phys_addr = mem->phys_base + (pageno << page_shift); virt_addr = ioremap(*phys_addr, size); if (!virt_addr) bitmap_release_region(mem->bitmap, pageno, order); @@ -129,11 +167,14 @@ void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr, void __iomem *virt_addr, size_t size) { int pageno; - int order = get_order(size); struct pci_epc_mem *mem = epc->mem; + unsigned int page_shift = ilog2(mem->page_size); + int order; iounmap(virt_addr); - pageno = (phys_addr - mem->phys_base) >> PAGE_SHIFT; + pageno = (phys_addr - mem->phys_base) >> page_shift; + size = ALIGN(size, mem->page_size); + order = pci_epc_mem_get_order(mem, size); bitmap_release_region(mem->bitmap, pageno, order); } EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr); diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c index 6877d6a5bcc9..ae1611a62808 100644 --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c @@ -27,7 +27,7 @@ #include <linux/pci-ep-cfs.h> static struct bus_type pci_epf_bus_type; -static struct device_type pci_epf_type; +static const struct device_type pci_epf_type; /** * pci_epf_linkup() - Notify the function driver that EPC device has @@ -267,6 +267,22 @@ err_ret: } EXPORT_SYMBOL_GPL(pci_epf_create); +const struct pci_epf_device_id * +pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf) +{ + if (!id || !epf) + return NULL; + + while (*id->name) { + if (strcmp(epf->name, id->name) == 0) + return id; + id++; + } + + return NULL; +} +EXPORT_SYMBOL_GPL(pci_epf_match_device); + static void pci_epf_dev_release(struct device *dev) { struct pci_epf *epf = to_pci_epf(dev); @@ -275,7 +291,7 @@ static void pci_epf_dev_release(struct device *dev) kfree(epf); } -static struct device_type pci_epf_type = { +static const struct device_type pci_epf_type = { .release = pci_epf_dev_release, }; @@ -317,11 +333,12 @@ static int pci_epf_device_probe(struct device *dev) static int pci_epf_device_remove(struct device *dev) { - int ret; + int ret = 0; struct pci_epf 
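__pci_epc_mem_init() and pci_epc_mem_alloc_addr() above size the allocator in units of a controller-specific page size instead of the CPU PAGE_SIZE. A worked sketch of the order computation, assuming a 64 KiB controller page and a 100 KiB window request (values chosen purely for illustration):

static int epc_mem_order_example(void)
{
	size_t page_size = SZ_64K;
	unsigned int page_shift = ilog2(page_size);	/* 16 */
	size_t size = 100 * SZ_1K;			/* requested window */

	size = ALIGN(size, page_size);			/* rounds up to 128 KiB */

	/* mirrors pci_epc_mem_get_order(): (size - 1) >> 16 == 1, fls(1) == 1 */
	return fls((size - 1) >> page_shift);		/* order 1: two 64 KiB pages */
}

bitmap_find_free_region() then reserves 2^order page-size units, and the mapped PCI address becomes phys_base + (pageno << page_shift), exactly as in the allocator above.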
*epf = to_pci_epf(dev); struct pci_epf_driver *driver = to_pci_epf_driver(dev->driver); - ret = driver->remove(epf); + if (driver->remove) + ret = driver->remove(epf); epf->driver = NULL; return ret; diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig index 89d61c2cbfaa..b868803792d8 100644 --- a/drivers/pci/host/Kconfig +++ b/drivers/pci/host/Kconfig @@ -71,7 +71,7 @@ config PCI_HOST_GENERIC config PCIE_XILINX bool "Xilinx AXI PCIe host bridge support" - depends on ARCH_ZYNQ || MICROBLAZE + depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC) help Say 'Y' here if you want kernel to support the Xilinx AXI PCIe Host Bridge driver. @@ -182,14 +182,13 @@ config PCIE_ROCKCHIP config PCIE_MEDIATEK bool "MediaTek PCIe controller" - depends on ARM && (ARCH_MEDIATEK || COMPILE_TEST) + depends on (ARM || ARM64) && (ARCH_MEDIATEK || COMPILE_TEST) depends on OF depends on PCI select PCIEPORTBUS help Say Y here if you want to enable PCIe controller support on - MT7623 series SoCs. There is one single root complex with 3 root - ports available. Each port supports Gen2 lane x1. + MediaTek SoCs. config PCIE_TANGO_SMP8759 bool "Tango SMP8759 PCIe controller (DANGEROUS)" diff --git a/drivers/pci/host/pci-aardvark.c b/drivers/pci/host/pci-aardvark.c index 5fb9b620ac78..89f4e3d072d7 100644 --- a/drivers/pci/host/pci-aardvark.c +++ b/drivers/pci/host/pci-aardvark.c @@ -191,7 +191,6 @@ #define LINK_WAIT_USLEEP_MIN 90000 #define LINK_WAIT_USLEEP_MAX 100000 -#define LEGACY_IRQ_NUM 4 #define MSI_IRQ_NUM 32 struct advk_pcie { @@ -729,7 +728,7 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie) irq_chip->irq_unmask = advk_pcie_irq_unmask; pcie->irq_domain = - irq_domain_add_linear(pcie_intc_node, LEGACY_IRQ_NUM, + irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, &advk_pcie_irq_domain_ops, pcie); if (!pcie->irq_domain) { dev_err(dev, "Failed to get a INTx IRQ domain\n"); @@ -786,7 +785,7 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie) advk_pcie_handle_msi(pcie); /* Process legacy interrupts */ - for (i = 0; i < LEGACY_IRQ_NUM; i++) { + for (i = 0; i < PCI_NUM_INTX; i++) { if (!(status & PCIE_ISR0_INTX_ASSERT(i))) continue; diff --git a/drivers/pci/host/pci-ftpci100.c b/drivers/pci/host/pci-ftpci100.c index 5162dffc102b..96028f01bc90 100644 --- a/drivers/pci/host/pci-ftpci100.c +++ b/drivers/pci/host/pci-ftpci100.c @@ -350,12 +350,12 @@ static int faraday_pci_setup_cascaded_irq(struct faraday_pci *p) /* All PCI IRQs cascade off this one */ irq = of_irq_get(intc, 0); - if (!irq) { + if (irq <= 0) { dev_err(p->dev, "failed to get parent IRQ\n"); - return -EINVAL; + return irq ?: -EINVAL; } - p->irqdomain = irq_domain_add_linear(intc, 4, + p->irqdomain = irq_domain_add_linear(intc, PCI_NUM_INTX, &faraday_pci_irqdomain_ops, p); if (!p->irqdomain) { dev_err(p->dev, "failed to create Gemini PCI IRQ domain\n"); diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c index aba041438566..0fe3ea164ee5 100644 --- a/drivers/pci/host/pci-hyperv.c +++ b/drivers/pci/host/pci-hyperv.c @@ -50,6 +50,7 @@ #include <linux/kernel.h> #include <linux/module.h> #include <linux/pci.h> +#include <linux/delay.h> #include <linux/semaphore.h> #include <linux/irqdomain.h> #include <asm/irqdomain.h> @@ -1113,7 +1114,12 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) goto free_int_desc; } - wait_for_completion(&comp.comp_pkt.host_event); + /* + * Since this function is called with IRQ locks held, can't + * do normal wait for completion; instead 
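The pci-ftpci100 hunk above handles both failure modes of of_irq_get(): a negative errno, which is propagated, and 0, meaning no mapping exists, which is turned into -EINVAL. The pattern in isolation (the a ?: b form is the GCC conditional extension commonly used in kernel code); the helper name is hypothetical:

static int get_parent_irq(struct device *dev, struct device_node *intc)
{
	int irq = of_irq_get(intc, 0);

	if (irq <= 0) {
		dev_err(dev, "failed to get parent IRQ\n");
		return irq ?: -EINVAL;	/* 0 means "no mapping", not success */
	}

	return irq;
}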
poll. + */ + while (!try_wait_for_completion(&comp.comp_pkt.host_event)) + udelay(100); if (comp.comp_pkt.completion_status < 0) { dev_err(&hbus->hdev->device, diff --git a/drivers/pci/host/pci-mvebu.c b/drivers/pci/host/pci-mvebu.c index f353a6eb2f01..8d88f19dc171 100644 --- a/drivers/pci/host/pci-mvebu.c +++ b/drivers/pci/host/pci-mvebu.c @@ -1054,8 +1054,8 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie, port->pcie = pcie; if (of_property_read_u32(child, "marvell,pcie-port", &port->port)) { - dev_warn(dev, "ignoring %s, missing pcie-port property\n", - of_node_full_name(child)); + dev_warn(dev, "ignoring %pOF, missing pcie-port property\n", + child); goto skip; } @@ -1106,8 +1106,8 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie, } if (flags & OF_GPIO_ACTIVE_LOW) { - dev_info(dev, "%s: reset gpio is active low\n", - of_node_full_name(child)); + dev_info(dev, "%pOF: reset gpio is active low\n", + child); gpio_flags = GPIOF_ACTIVE_LOW | GPIOF_OUT_INIT_LOW; } else { @@ -1186,8 +1186,7 @@ static int mvebu_pcie_powerup(struct mvebu_pcie_port *port) */ static void mvebu_pcie_powerdown(struct mvebu_pcie_port *port) { - if (port->reset_gpio) - gpiod_set_value_cansleep(port->reset_gpio, 1); + gpiod_set_value_cansleep(port->reset_gpio, 1); clk_disable_unprepare(port->clk); } diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c index b3722b7709df..9c40da54f88a 100644 --- a/drivers/pci/host/pci-tegra.c +++ b/drivers/pci/host/pci-tegra.c @@ -1147,15 +1147,15 @@ static int tegra_pcie_resets_get(struct tegra_pcie *pcie) { struct device *dev = pcie->dev; - pcie->pex_rst = devm_reset_control_get(dev, "pex"); + pcie->pex_rst = devm_reset_control_get_exclusive(dev, "pex"); if (IS_ERR(pcie->pex_rst)) return PTR_ERR(pcie->pex_rst); - pcie->afi_rst = devm_reset_control_get(dev, "afi"); + pcie->afi_rst = devm_reset_control_get_exclusive(dev, "afi"); if (IS_ERR(pcie->afi_rst)) return PTR_ERR(pcie->afi_rst); - pcie->pcie_xrst = devm_reset_control_get(dev, "pcie_x"); + pcie->pcie_xrst = devm_reset_control_get_exclusive(dev, "pcie_x"); if (IS_ERR(pcie->pcie_xrst)) return PTR_ERR(pcie->pcie_xrst); @@ -1703,8 +1703,7 @@ static int tegra_pcie_get_legacy_regulators(struct tegra_pcie *pcie) pcie->num_supplies = 2; if (pcie->num_supplies == 0) { - dev_err(dev, "device %s not supported in legacy mode\n", - np->full_name); + dev_err(dev, "device %pOF not supported in legacy mode\n", np); return -ENODEV; } diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c index f1b633bce525..1f42a202b021 100644 --- a/drivers/pci/host/pci-xgene-msi.c +++ b/drivers/pci/host/pci-xgene-msi.c @@ -489,7 +489,7 @@ static int xgene_msi_probe(struct platform_device *pdev) if (virt_msir < 0) { dev_err(&pdev->dev, "Cannot translate IRQ index %d\n", irq_index); - rc = -EINVAL; + rc = virt_msir; goto error; } xgene_msi->msi_groups[irq_index].gic_irq = virt_msir; diff --git a/drivers/pci/host/pci-xgene.c b/drivers/pci/host/pci-xgene.c index bd897479a215..087645116ecb 100644 --- a/drivers/pci/host/pci-xgene.c +++ b/drivers/pci/host/pci-xgene.c @@ -61,7 +61,7 @@ #define SZ_1T (SZ_1G*1024ULL) #define PIPE_PHY_RATE_RD(src) ((0xc000 & (u32)(src)) >> 0xe) -#define ROOT_CAP_AND_CTRL 0x5C +#define XGENE_V1_PCI_EXP_CAP 0x40 /* PCIe IP version */ #define XGENE_PCIE_IP_VER_UNKN 0 @@ -160,7 +160,7 @@ static bool xgene_pcie_hide_rc_bars(struct pci_bus *bus, int offset) } static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, - int offset) + int offset) { if 
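The Hyper-V change above cannot sleep because hv_compose_msi_msg() runs with IRQ locks held, so it polls the completion instead of blocking on it. The pattern on its own; as in the patch the loop is unbounded, and a caller that can tolerate failure would add a timeout:

static void poll_completion_atomic(struct completion *done)
{
	/* busy-wait with a short delay; usable where sleeping is not allowed */
	while (!try_wait_for_completion(done))
		udelay(100);
}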
((pci_is_root_bus(bus) && devfn != 0) || xgene_pcie_hide_rc_bars(bus, offset)) @@ -189,7 +189,7 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, * Avoid this by not claiming to support CRS. */ if (pci_is_root_bus(bus) && (port->version == XGENE_PCIE_IP_VER_1) && - ((where & ~0x3) == ROOT_CAP_AND_CTRL)) + ((where & ~0x3) == XGENE_V1_PCI_EXP_CAP + PCI_EXP_RTCTL)) *val &= ~(PCI_EXP_RTCAP_CRSVIS << 16); if (size <= 2) @@ -265,12 +265,12 @@ static int xgene_v1_pcie_ecam_init(struct pci_config_window *cfg) } struct pci_ecam_ops xgene_v1_pcie_ecam_ops = { - .bus_shift = 16, - .init = xgene_v1_pcie_ecam_init, - .pci_ops = { - .map_bus = xgene_pcie_map_bus, - .read = xgene_pcie_config_read32, - .write = pci_generic_config_write, + .bus_shift = 16, + .init = xgene_v1_pcie_ecam_init, + .pci_ops = { + .map_bus = xgene_pcie_map_bus, + .read = xgene_pcie_config_read32, + .write = pci_generic_config_write, } }; @@ -280,12 +280,12 @@ static int xgene_v2_pcie_ecam_init(struct pci_config_window *cfg) } struct pci_ecam_ops xgene_v2_pcie_ecam_ops = { - .bus_shift = 16, - .init = xgene_v2_pcie_ecam_init, - .pci_ops = { - .map_bus = xgene_pcie_map_bus, - .read = xgene_pcie_config_read32, - .write = pci_generic_config_write, + .bus_shift = 16, + .init = xgene_v2_pcie_ecam_init, + .pci_ops = { + .map_bus = xgene_pcie_map_bus, + .read = xgene_pcie_config_read32, + .write = pci_generic_config_write, } }; #endif @@ -318,7 +318,7 @@ static u64 xgene_pcie_set_ib_mask(struct xgene_pcie_port *port, u32 addr, } static void xgene_pcie_linkup(struct xgene_pcie_port *port, - u32 *lanes, u32 *speed) + u32 *lanes, u32 *speed) { u32 val32; @@ -593,8 +593,7 @@ static void xgene_pcie_clear_config(struct xgene_pcie_port *port) xgene_pcie_writel(port, i, 0); } -static int xgene_pcie_setup(struct xgene_pcie_port *port, - struct list_head *res, +static int xgene_pcie_setup(struct xgene_pcie_port *port, struct list_head *res, resource_size_t io_base) { struct device *dev = port->dev; @@ -706,9 +705,9 @@ static const struct of_device_id xgene_pcie_match_table[] = { static struct platform_driver xgene_pcie_driver = { .driver = { - .name = "xgene-pcie", - .of_match_table = of_match_ptr(xgene_pcie_match_table), - .suppress_bind_attrs = true, + .name = "xgene-pcie", + .of_match_table = of_match_ptr(xgene_pcie_match_table), + .suppress_bind_attrs = true, }, .probe = xgene_pcie_probe_bridge, }; diff --git a/drivers/pci/host/pcie-altera-msi.c b/drivers/pci/host/pcie-altera-msi.c index 4e5d628e8cd4..d8141f4865de 100644 --- a/drivers/pci/host/pcie-altera-msi.c +++ b/drivers/pci/host/pcie-altera-msi.c @@ -64,13 +64,11 @@ static void altera_msi_isr(struct irq_desc *desc) struct irq_chip *chip = irq_desc_get_chip(desc); struct altera_msi *msi; unsigned long status; - u32 num_of_vectors; u32 bit; u32 virq; chained_irq_enter(chip, desc); msi = irq_desc_get_handler_data(desc); - num_of_vectors = msi->num_of_vectors; while ((status = msi_readl(msi, MSI_STATUS)) != 0) { for_each_set_bit(bit, &status, msi->num_of_vectors) { @@ -267,9 +265,9 @@ static int altera_msi_probe(struct platform_device *pdev) return ret; msi->irq = platform_get_irq(pdev, 0); - if (msi->irq <= 0) { + if (msi->irq < 0) { dev_err(&pdev->dev, "failed to map IRQ: %d\n", msi->irq); - ret = -ENODEV; + ret = msi->irq; goto err; } diff --git a/drivers/pci/host/pcie-altera.c b/drivers/pci/host/pcie-altera.c index 4ea4f8f5dc77..b468b8cccf8d 100644 --- a/drivers/pci/host/pcie-altera.c +++ b/drivers/pci/host/pcie-altera.c @@ -76,8 +76,6 @@ #define 
LINK_UP_TIMEOUT HZ #define LINK_RETRAIN_TIMEOUT HZ -#define INTX_NUM 4 - #define DWORD_MASK 3 struct altera_pcie { @@ -464,6 +462,7 @@ static int altera_pcie_intx_map(struct irq_domain *domain, unsigned int irq, static const struct irq_domain_ops intx_domain_ops = { .map = altera_pcie_intx_map, + .xlate = pci_irqd_intx_xlate, }; static void altera_pcie_isr(struct irq_desc *desc) @@ -481,11 +480,11 @@ static void altera_pcie_isr(struct irq_desc *desc) while ((status = cra_readl(pcie, P2A_INT_STATUS) & P2A_INT_STS_ALL) != 0) { - for_each_set_bit(bit, &status, INTX_NUM) { + for_each_set_bit(bit, &status, PCI_NUM_INTX) { /* clear interrupts */ cra_writel(pcie, 1 << bit, P2A_INT_STATUS); - virq = irq_find_mapping(pcie->irq_domain, bit + 1); + virq = irq_find_mapping(pcie->irq_domain, bit); if (virq) generic_handle_irq(virq); else @@ -536,7 +535,7 @@ static int altera_pcie_init_irq_domain(struct altera_pcie *pcie) struct device_node *node = dev->of_node; /* Setup INTx */ - pcie->irq_domain = irq_domain_add_linear(node, INTX_NUM + 1, + pcie->irq_domain = irq_domain_add_linear(node, PCI_NUM_INTX, &intx_domain_ops, pcie); if (!pcie->irq_domain) { dev_err(dev, "Failed to get a INTx IRQ domain\n"); @@ -559,9 +558,9 @@ static int altera_pcie_parse_dt(struct altera_pcie *pcie) /* setup IRQ */ pcie->irq = platform_get_irq(pdev, 0); - if (pcie->irq <= 0) { + if (pcie->irq < 0) { dev_err(dev, "failed to get IRQ: %d\n", pcie->irq); - return -EINVAL; + return pcie->irq; } irq_set_chained_handler_and_data(pcie->irq, altera_pcie_isr, pcie); diff --git a/drivers/pci/host/pcie-iproc-msi.c b/drivers/pci/host/pcie-iproc-msi.c index 9fad7915f82a..2d0f535a2f69 100644 --- a/drivers/pci/host/pcie-iproc-msi.c +++ b/drivers/pci/host/pcie-iproc-msi.c @@ -317,7 +317,6 @@ static void iproc_msi_handler(struct irq_desc *desc) struct irq_chip *chip = irq_desc_get_chip(desc); struct iproc_msi_grp *grp; struct iproc_msi *msi; - struct iproc_pcie *pcie; u32 eq, head, tail, nr_events; unsigned long hwirq; int virq; @@ -326,7 +325,6 @@ static void iproc_msi_handler(struct irq_desc *desc) grp = irq_desc_get_handler_data(desc); msi = grp->msi; - pcie = msi->pcie; eq = grp->eq; /* diff --git a/drivers/pci/host/pcie-iproc-platform.c b/drivers/pci/host/pcie-iproc-platform.c index 22531190bc40..a5073a921a04 100644 --- a/drivers/pci/host/pcie-iproc-platform.c +++ b/drivers/pci/host/pcie-iproc-platform.c @@ -134,6 +134,13 @@ static int iproc_pcie_pltfm_remove(struct platform_device *pdev) return iproc_pcie_remove(pcie); } +static void iproc_pcie_pltfm_shutdown(struct platform_device *pdev) +{ + struct iproc_pcie *pcie = platform_get_drvdata(pdev); + + iproc_pcie_shutdown(pcie); +} + static struct platform_driver iproc_pcie_pltfm_driver = { .driver = { .name = "iproc-pcie", @@ -141,6 +148,7 @@ static struct platform_driver iproc_pcie_pltfm_driver = { }, .probe = iproc_pcie_pltfm_probe, .remove = iproc_pcie_pltfm_remove, + .shutdown = iproc_pcie_pltfm_shutdown, }; module_platform_driver(iproc_pcie_pltfm_driver); diff --git a/drivers/pci/host/pcie-iproc.c b/drivers/pci/host/pcie-iproc.c index c57486348856..3a8b9d20ee57 100644 --- a/drivers/pci/host/pcie-iproc.c +++ b/drivers/pci/host/pcie-iproc.c @@ -31,68 +31,71 @@ #include "pcie-iproc.h" -#define EP_PERST_SOURCE_SELECT_SHIFT 2 -#define EP_PERST_SOURCE_SELECT BIT(EP_PERST_SOURCE_SELECT_SHIFT) -#define EP_MODE_SURVIVE_PERST_SHIFT 1 -#define EP_MODE_SURVIVE_PERST BIT(EP_MODE_SURVIVE_PERST_SHIFT) -#define RC_PCIE_RST_OUTPUT_SHIFT 0 -#define RC_PCIE_RST_OUTPUT BIT(RC_PCIE_RST_OUTPUT_SHIFT) 
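The Altera hunk above (like the Aardvark one earlier) sizes the INTx domain with PCI_NUM_INTX, adds pci_irqd_intx_xlate() so the 1-4 interrupt specifier from DT becomes a 0-based hwirq, and maps status bit N straight to hwirq N instead of N + 1. A sketch of the two pieces; my_intx_map() stands in for the driver's own .map callback:

static const struct irq_domain_ops sketch_intx_domain_ops = {
	.map = my_intx_map,		/* driver-specific, hypothetical */
	.xlate = pci_irqd_intx_xlate,	/* DT INTA..INTD (1-4) -> hwirq 0-3 */
};

static void sketch_handle_intx(struct irq_domain *domain, unsigned long status)
{
	unsigned int bit;

	for_each_set_bit(bit, &status, PCI_NUM_INTX)
		generic_handle_irq(irq_find_mapping(domain, bit));
}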
-#define PAXC_RESET_MASK 0x7f - -#define GIC_V3_CFG_SHIFT 0 -#define GIC_V3_CFG BIT(GIC_V3_CFG_SHIFT) - -#define MSI_ENABLE_CFG_SHIFT 0 -#define MSI_ENABLE_CFG BIT(MSI_ENABLE_CFG_SHIFT) - -#define CFG_IND_ADDR_MASK 0x00001ffc - -#define CFG_ADDR_BUS_NUM_SHIFT 20 -#define CFG_ADDR_BUS_NUM_MASK 0x0ff00000 -#define CFG_ADDR_DEV_NUM_SHIFT 15 -#define CFG_ADDR_DEV_NUM_MASK 0x000f8000 -#define CFG_ADDR_FUNC_NUM_SHIFT 12 -#define CFG_ADDR_FUNC_NUM_MASK 0x00007000 -#define CFG_ADDR_REG_NUM_SHIFT 2 -#define CFG_ADDR_REG_NUM_MASK 0x00000ffc -#define CFG_ADDR_CFG_TYPE_SHIFT 0 -#define CFG_ADDR_CFG_TYPE_MASK 0x00000003 - -#define SYS_RC_INTX_MASK 0xf - -#define PCIE_PHYLINKUP_SHIFT 3 -#define PCIE_PHYLINKUP BIT(PCIE_PHYLINKUP_SHIFT) -#define PCIE_DL_ACTIVE_SHIFT 2 -#define PCIE_DL_ACTIVE BIT(PCIE_DL_ACTIVE_SHIFT) - -#define APB_ERR_EN_SHIFT 0 -#define APB_ERR_EN BIT(APB_ERR_EN_SHIFT) +#define EP_PERST_SOURCE_SELECT_SHIFT 2 +#define EP_PERST_SOURCE_SELECT BIT(EP_PERST_SOURCE_SELECT_SHIFT) +#define EP_MODE_SURVIVE_PERST_SHIFT 1 +#define EP_MODE_SURVIVE_PERST BIT(EP_MODE_SURVIVE_PERST_SHIFT) +#define RC_PCIE_RST_OUTPUT_SHIFT 0 +#define RC_PCIE_RST_OUTPUT BIT(RC_PCIE_RST_OUTPUT_SHIFT) +#define PAXC_RESET_MASK 0x7f + +#define GIC_V3_CFG_SHIFT 0 +#define GIC_V3_CFG BIT(GIC_V3_CFG_SHIFT) + +#define MSI_ENABLE_CFG_SHIFT 0 +#define MSI_ENABLE_CFG BIT(MSI_ENABLE_CFG_SHIFT) + +#define CFG_IND_ADDR_MASK 0x00001ffc + +#define CFG_ADDR_BUS_NUM_SHIFT 20 +#define CFG_ADDR_BUS_NUM_MASK 0x0ff00000 +#define CFG_ADDR_DEV_NUM_SHIFT 15 +#define CFG_ADDR_DEV_NUM_MASK 0x000f8000 +#define CFG_ADDR_FUNC_NUM_SHIFT 12 +#define CFG_ADDR_FUNC_NUM_MASK 0x00007000 +#define CFG_ADDR_REG_NUM_SHIFT 2 +#define CFG_ADDR_REG_NUM_MASK 0x00000ffc +#define CFG_ADDR_CFG_TYPE_SHIFT 0 +#define CFG_ADDR_CFG_TYPE_MASK 0x00000003 + +#define SYS_RC_INTX_MASK 0xf + +#define PCIE_PHYLINKUP_SHIFT 3 +#define PCIE_PHYLINKUP BIT(PCIE_PHYLINKUP_SHIFT) +#define PCIE_DL_ACTIVE_SHIFT 2 +#define PCIE_DL_ACTIVE BIT(PCIE_DL_ACTIVE_SHIFT) + +#define APB_ERR_EN_SHIFT 0 +#define APB_ERR_EN BIT(APB_ERR_EN_SHIFT) + +#define CFG_RETRY_STATUS 0xffff0001 +#define CFG_RETRY_STATUS_TIMEOUT_US 500000 /* 500 milliseconds */ /* derive the enum index of the outbound/inbound mapping registers */ -#define MAP_REG(base_reg, index) ((base_reg) + (index) * 2) +#define MAP_REG(base_reg, index) ((base_reg) + (index) * 2) /* * Maximum number of outbound mapping window sizes that can be supported by any * OARR/OMAP mapping pair */ -#define MAX_NUM_OB_WINDOW_SIZES 4 +#define MAX_NUM_OB_WINDOW_SIZES 4 -#define OARR_VALID_SHIFT 0 -#define OARR_VALID BIT(OARR_VALID_SHIFT) -#define OARR_SIZE_CFG_SHIFT 1 +#define OARR_VALID_SHIFT 0 +#define OARR_VALID BIT(OARR_VALID_SHIFT) +#define OARR_SIZE_CFG_SHIFT 1 /* * Maximum number of inbound mapping region sizes that can be supported by an * IARR */ -#define MAX_NUM_IB_REGION_SIZES 9 +#define MAX_NUM_IB_REGION_SIZES 9 -#define IMAP_VALID_SHIFT 0 -#define IMAP_VALID BIT(IMAP_VALID_SHIFT) +#define IMAP_VALID_SHIFT 0 +#define IMAP_VALID BIT(IMAP_VALID_SHIFT) -#define PCI_EXP_CAP 0xac +#define IPROC_PCI_EXP_CAP 0xac -#define IPROC_PCIE_REG_INVALID 0xffff +#define IPROC_PCIE_REG_INVALID 0xffff /** * iProc PCIe outbound mapping controller specific parameters @@ -304,80 +307,80 @@ enum iproc_pcie_reg { /* iProc PCIe PAXB BCMA registers */ static const u16 iproc_pcie_reg_paxb_bcma[] = { - [IPROC_PCIE_CLK_CTRL] = 0x000, - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, - [IPROC_PCIE_CFG_IND_DATA] = 0x124, - [IPROC_PCIE_CFG_ADDR] = 0x1f8, - [IPROC_PCIE_CFG_DATA] = 
0x1fc, - [IPROC_PCIE_INTX_EN] = 0x330, - [IPROC_PCIE_LINK_STATUS] = 0xf0c, + [IPROC_PCIE_CLK_CTRL] = 0x000, + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, + [IPROC_PCIE_CFG_IND_DATA] = 0x124, + [IPROC_PCIE_CFG_ADDR] = 0x1f8, + [IPROC_PCIE_CFG_DATA] = 0x1fc, + [IPROC_PCIE_INTX_EN] = 0x330, + [IPROC_PCIE_LINK_STATUS] = 0xf0c, }; /* iProc PCIe PAXB registers */ static const u16 iproc_pcie_reg_paxb[] = { - [IPROC_PCIE_CLK_CTRL] = 0x000, - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, - [IPROC_PCIE_CFG_IND_DATA] = 0x124, - [IPROC_PCIE_CFG_ADDR] = 0x1f8, - [IPROC_PCIE_CFG_DATA] = 0x1fc, - [IPROC_PCIE_INTX_EN] = 0x330, - [IPROC_PCIE_OARR0] = 0xd20, - [IPROC_PCIE_OMAP0] = 0xd40, - [IPROC_PCIE_OARR1] = 0xd28, - [IPROC_PCIE_OMAP1] = 0xd48, - [IPROC_PCIE_LINK_STATUS] = 0xf0c, - [IPROC_PCIE_APB_ERR_EN] = 0xf40, + [IPROC_PCIE_CLK_CTRL] = 0x000, + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, + [IPROC_PCIE_CFG_IND_DATA] = 0x124, + [IPROC_PCIE_CFG_ADDR] = 0x1f8, + [IPROC_PCIE_CFG_DATA] = 0x1fc, + [IPROC_PCIE_INTX_EN] = 0x330, + [IPROC_PCIE_OARR0] = 0xd20, + [IPROC_PCIE_OMAP0] = 0xd40, + [IPROC_PCIE_OARR1] = 0xd28, + [IPROC_PCIE_OMAP1] = 0xd48, + [IPROC_PCIE_LINK_STATUS] = 0xf0c, + [IPROC_PCIE_APB_ERR_EN] = 0xf40, }; /* iProc PCIe PAXB v2 registers */ static const u16 iproc_pcie_reg_paxb_v2[] = { - [IPROC_PCIE_CLK_CTRL] = 0x000, - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, - [IPROC_PCIE_CFG_IND_DATA] = 0x124, - [IPROC_PCIE_CFG_ADDR] = 0x1f8, - [IPROC_PCIE_CFG_DATA] = 0x1fc, - [IPROC_PCIE_INTX_EN] = 0x330, - [IPROC_PCIE_OARR0] = 0xd20, - [IPROC_PCIE_OMAP0] = 0xd40, - [IPROC_PCIE_OARR1] = 0xd28, - [IPROC_PCIE_OMAP1] = 0xd48, - [IPROC_PCIE_OARR2] = 0xd60, - [IPROC_PCIE_OMAP2] = 0xd68, - [IPROC_PCIE_OARR3] = 0xdf0, - [IPROC_PCIE_OMAP3] = 0xdf8, - [IPROC_PCIE_IARR0] = 0xd00, - [IPROC_PCIE_IMAP0] = 0xc00, - [IPROC_PCIE_IARR2] = 0xd10, - [IPROC_PCIE_IMAP2] = 0xcc0, - [IPROC_PCIE_IARR3] = 0xe00, - [IPROC_PCIE_IMAP3] = 0xe08, - [IPROC_PCIE_IARR4] = 0xe68, - [IPROC_PCIE_IMAP4] = 0xe70, - [IPROC_PCIE_LINK_STATUS] = 0xf0c, - [IPROC_PCIE_APB_ERR_EN] = 0xf40, + [IPROC_PCIE_CLK_CTRL] = 0x000, + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, + [IPROC_PCIE_CFG_IND_DATA] = 0x124, + [IPROC_PCIE_CFG_ADDR] = 0x1f8, + [IPROC_PCIE_CFG_DATA] = 0x1fc, + [IPROC_PCIE_INTX_EN] = 0x330, + [IPROC_PCIE_OARR0] = 0xd20, + [IPROC_PCIE_OMAP0] = 0xd40, + [IPROC_PCIE_OARR1] = 0xd28, + [IPROC_PCIE_OMAP1] = 0xd48, + [IPROC_PCIE_OARR2] = 0xd60, + [IPROC_PCIE_OMAP2] = 0xd68, + [IPROC_PCIE_OARR3] = 0xdf0, + [IPROC_PCIE_OMAP3] = 0xdf8, + [IPROC_PCIE_IARR0] = 0xd00, + [IPROC_PCIE_IMAP0] = 0xc00, + [IPROC_PCIE_IARR2] = 0xd10, + [IPROC_PCIE_IMAP2] = 0xcc0, + [IPROC_PCIE_IARR3] = 0xe00, + [IPROC_PCIE_IMAP3] = 0xe08, + [IPROC_PCIE_IARR4] = 0xe68, + [IPROC_PCIE_IMAP4] = 0xe70, + [IPROC_PCIE_LINK_STATUS] = 0xf0c, + [IPROC_PCIE_APB_ERR_EN] = 0xf40, }; /* iProc PCIe PAXC v1 registers */ static const u16 iproc_pcie_reg_paxc[] = { - [IPROC_PCIE_CLK_CTRL] = 0x000, - [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, - [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, - [IPROC_PCIE_CFG_ADDR] = 0x1f8, - [IPROC_PCIE_CFG_DATA] = 0x1fc, + [IPROC_PCIE_CLK_CTRL] = 0x000, + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, + [IPROC_PCIE_CFG_ADDR] = 0x1f8, + [IPROC_PCIE_CFG_DATA] = 0x1fc, }; /* iProc PCIe PAXC v2 registers */ static const u16 iproc_pcie_reg_paxc_v2[] = { - [IPROC_PCIE_MSI_GIC_MODE] = 0x050, - [IPROC_PCIE_MSI_BASE_ADDR] = 0x074, - [IPROC_PCIE_MSI_WINDOW_SIZE] = 0x078, - [IPROC_PCIE_MSI_ADDR_LO] = 0x07c, - [IPROC_PCIE_MSI_ADDR_HI] = 0x080, - [IPROC_PCIE_MSI_EN_CFG] = 0x09c, - [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, - 
[IPROC_PCIE_CFG_IND_DATA] = 0x1f4, - [IPROC_PCIE_CFG_ADDR] = 0x1f8, - [IPROC_PCIE_CFG_DATA] = 0x1fc, + [IPROC_PCIE_MSI_GIC_MODE] = 0x050, + [IPROC_PCIE_MSI_BASE_ADDR] = 0x074, + [IPROC_PCIE_MSI_WINDOW_SIZE] = 0x078, + [IPROC_PCIE_MSI_ADDR_LO] = 0x07c, + [IPROC_PCIE_MSI_ADDR_HI] = 0x080, + [IPROC_PCIE_MSI_EN_CFG] = 0x09c, + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, + [IPROC_PCIE_CFG_ADDR] = 0x1f8, + [IPROC_PCIE_CFG_DATA] = 0x1fc, }; static inline struct iproc_pcie *iproc_data(struct pci_bus *bus) @@ -448,18 +451,112 @@ static inline void iproc_pcie_apb_err_disable(struct pci_bus *bus, } } +static void __iomem *iproc_pcie_map_ep_cfg_reg(struct iproc_pcie *pcie, + unsigned int busno, + unsigned int slot, + unsigned int fn, + int where) +{ + u16 offset; + u32 val; + + /* EP device access */ + val = (busno << CFG_ADDR_BUS_NUM_SHIFT) | + (slot << CFG_ADDR_DEV_NUM_SHIFT) | + (fn << CFG_ADDR_FUNC_NUM_SHIFT) | + (where & CFG_ADDR_REG_NUM_MASK) | + (1 & CFG_ADDR_CFG_TYPE_MASK); + + iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_ADDR, val); + offset = iproc_pcie_reg_offset(pcie, IPROC_PCIE_CFG_DATA); + + if (iproc_pcie_reg_is_invalid(offset)) + return NULL; + + return (pcie->base + offset); +} + +static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p) +{ + int timeout = CFG_RETRY_STATUS_TIMEOUT_US; + unsigned int data; + + /* + * As per PCIe spec r3.1, sec 2.3.2, CRS Software Visibility only + * affects config reads of the Vendor ID. For config writes or any + * other config reads, the Root may automatically reissue the + * configuration request again as a new request. + * + * For config reads, this hardware returns CFG_RETRY_STATUS data + * when it receives a CRS completion, regardless of the address of + * the read or the CRS Software Visibility Enable bit. As a + * partial workaround for this, we retry in software any read that + * returns CFG_RETRY_STATUS. + * + * Note that a non-Vendor ID config register may have a value of + * CFG_RETRY_STATUS. If we read that, we can't distinguish it from + * a CRS completion, so we will incorrectly retry the read and + * eventually return the wrong data (0xffffffff). 
+ */ + data = readl(cfg_data_p); + while (data == CFG_RETRY_STATUS && timeout--) { + udelay(1); + data = readl(cfg_data_p); + } + + if (data == CFG_RETRY_STATUS) + data = 0xffffffff; + + return data; +} + +static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn, + int where, int size, u32 *val) +{ + struct iproc_pcie *pcie = iproc_data(bus); + unsigned int slot = PCI_SLOT(devfn); + unsigned int fn = PCI_FUNC(devfn); + unsigned int busno = bus->number; + void __iomem *cfg_data_p; + unsigned int data; + int ret; + + /* root complex access */ + if (busno == 0) { + ret = pci_generic_config_read32(bus, devfn, where, size, val); + if (ret != PCIBIOS_SUCCESSFUL) + return ret; + + /* Don't advertise CRS SV support */ + if ((where & ~0x3) == IPROC_PCI_EXP_CAP + PCI_EXP_RTCTL) + *val &= ~(PCI_EXP_RTCAP_CRSVIS << 16); + return PCIBIOS_SUCCESSFUL; + } + + cfg_data_p = iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); + + if (!cfg_data_p) + return PCIBIOS_DEVICE_NOT_FOUND; + + data = iproc_pcie_cfg_retry(cfg_data_p); + + *val = data; + if (size <= 2) + *val = (data >> (8 * (where & 3))) & ((1 << (size * 8)) - 1); + + return PCIBIOS_SUCCESSFUL; +} + /** * Note access to the configuration registers are protected at the higher layer * by 'pci_lock' in drivers/pci/access.c */ static void __iomem *iproc_pcie_map_cfg_bus(struct iproc_pcie *pcie, - int busno, - unsigned int devfn, + int busno, unsigned int devfn, int where) { unsigned slot = PCI_SLOT(devfn); unsigned fn = PCI_FUNC(devfn); - u32 val; u16 offset; /* root complex access */ @@ -484,18 +581,7 @@ static void __iomem *iproc_pcie_map_cfg_bus(struct iproc_pcie *pcie, if (slot > 0) return NULL; - /* EP device access */ - val = (busno << CFG_ADDR_BUS_NUM_SHIFT) | - (slot << CFG_ADDR_DEV_NUM_SHIFT) | - (fn << CFG_ADDR_FUNC_NUM_SHIFT) | - (where & CFG_ADDR_REG_NUM_MASK) | - (1 & CFG_ADDR_CFG_TYPE_MASK); - iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_ADDR, val); - offset = iproc_pcie_reg_offset(pcie, IPROC_PCIE_CFG_DATA); - if (iproc_pcie_reg_is_invalid(offset)) - return NULL; - else - return (pcie->base + offset); + return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); } static void __iomem *iproc_pcie_bus_map_cfg_bus(struct pci_bus *bus, @@ -554,9 +640,13 @@ static int iproc_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val) { int ret; + struct iproc_pcie *pcie = iproc_data(bus); iproc_pcie_apb_err_disable(bus, true); - ret = pci_generic_config_read32(bus, devfn, where, size, val); + if (pcie->type == IPROC_PCIE_PAXB_V2) + ret = iproc_pcie_config_read(bus, devfn, where, size, val); + else + ret = pci_generic_config_read32(bus, devfn, where, size, val); iproc_pcie_apb_err_disable(bus, false); return ret; @@ -580,7 +670,7 @@ static struct pci_ops iproc_pcie_ops = { .write = iproc_pcie_config_write32, }; -static void iproc_pcie_reset(struct iproc_pcie *pcie) +static void iproc_pcie_perst_ctrl(struct iproc_pcie *pcie, bool assert) { u32 val; @@ -592,26 +682,33 @@ static void iproc_pcie_reset(struct iproc_pcie *pcie) if (pcie->ep_is_internal) return; - /* - * Select perst_b signal as reset source. 
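iproc_pcie_cfg_retry() above works around the PAXB v2 behaviour of returning the magic value 0xffff0001 for a config read completed with CRS: the read is retried in software for a bounded time, and all-ones is returned if the value never changes. The same bounded poll, written generically:

static u32 cfg_read_retry(void __iomem *cfg_data, u32 retry_val,
			  unsigned int timeout_us)
{
	u32 data = readl(cfg_data);

	while (data == retry_val && timeout_us--) {
		udelay(1);
		data = readl(cfg_data);
	}

	/* still stuck: report it like a master abort */
	return data == retry_val ? 0xffffffff : data;
}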
Put the device into reset, - * and then bring it out of reset - */ - val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); - val &= ~EP_PERST_SOURCE_SELECT & ~EP_MODE_SURVIVE_PERST & - ~RC_PCIE_RST_OUTPUT; - iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); - udelay(250); - - val |= RC_PCIE_RST_OUTPUT; - iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); - msleep(100); + if (assert) { + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); + val &= ~EP_PERST_SOURCE_SELECT & ~EP_MODE_SURVIVE_PERST & + ~RC_PCIE_RST_OUTPUT; + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); + udelay(250); + } else { + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); + val |= RC_PCIE_RST_OUTPUT; + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); + msleep(100); + } +} + +int iproc_pcie_shutdown(struct iproc_pcie *pcie) +{ + iproc_pcie_perst_ctrl(pcie, true); + msleep(500); + + return 0; } +EXPORT_SYMBOL_GPL(iproc_pcie_shutdown); static int iproc_pcie_check_link(struct iproc_pcie *pcie) { struct device *dev = pcie->dev; u32 hdr_type, link_ctrl, link_status, class, val; - u16 pos = PCI_EXP_CAP; bool link_is_active = false; /* @@ -628,16 +725,16 @@ static int iproc_pcie_check_link(struct iproc_pcie *pcie) } /* make sure we are not in EP mode */ - iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type); + iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type); if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) { dev_err(dev, "in EP mode, hdr=%#02x\n", hdr_type); return -EFAULT; } /* force class to PCI_CLASS_BRIDGE_PCI (0x0604) */ -#define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c -#define PCI_CLASS_BRIDGE_MASK 0xffff00 -#define PCI_CLASS_BRIDGE_SHIFT 8 +#define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c +#define PCI_CLASS_BRIDGE_MASK 0xffff00 +#define PCI_CLASS_BRIDGE_SHIFT 8 iproc_pci_raw_config_read32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET, 4, &class); class &= ~PCI_CLASS_BRIDGE_MASK; @@ -646,31 +743,31 @@ static int iproc_pcie_check_link(struct iproc_pcie *pcie) 4, class); /* check link status to see if link is active */ - iproc_pci_raw_config_read32(pcie, 0, pos + PCI_EXP_LNKSTA, + iproc_pci_raw_config_read32(pcie, 0, IPROC_PCI_EXP_CAP + PCI_EXP_LNKSTA, 2, &link_status); if (link_status & PCI_EXP_LNKSTA_NLW) link_is_active = true; if (!link_is_active) { /* try GEN 1 link speed */ -#define PCI_TARGET_LINK_SPEED_MASK 0xf -#define PCI_TARGET_LINK_SPEED_GEN2 0x2 -#define PCI_TARGET_LINK_SPEED_GEN1 0x1 +#define PCI_TARGET_LINK_SPEED_MASK 0xf +#define PCI_TARGET_LINK_SPEED_GEN2 0x2 +#define PCI_TARGET_LINK_SPEED_GEN1 0x1 iproc_pci_raw_config_read32(pcie, 0, - pos + PCI_EXP_LNKCTL2, 4, - &link_ctrl); + IPROC_PCI_EXP_CAP + PCI_EXP_LNKCTL2, + 4, &link_ctrl); if ((link_ctrl & PCI_TARGET_LINK_SPEED_MASK) == PCI_TARGET_LINK_SPEED_GEN2) { link_ctrl &= ~PCI_TARGET_LINK_SPEED_MASK; link_ctrl |= PCI_TARGET_LINK_SPEED_GEN1; iproc_pci_raw_config_write32(pcie, 0, - pos + PCI_EXP_LNKCTL2, - 4, link_ctrl); + IPROC_PCI_EXP_CAP + PCI_EXP_LNKCTL2, + 4, link_ctrl); msleep(100); iproc_pci_raw_config_read32(pcie, 0, - pos + PCI_EXP_LNKSTA, - 2, &link_status); + IPROC_PCI_EXP_CAP + PCI_EXP_LNKSTA, + 2, &link_status); if (link_status & PCI_EXP_LNKSTA_NLW) link_is_active = true; } @@ -1223,6 +1320,8 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie) pcie->ib.nr_regions = ARRAY_SIZE(paxb_v2_ib_map); pcie->ib_map = paxb_v2_ib_map; pcie->need_msi_steer = true; + dev_warn(dev, "reads of config registers that contain %#x return incorrect data\n", + CFG_RETRY_STATUS); break; case IPROC_PCIE_PAXC: regs = 
iproc_pcie_reg_paxc; @@ -1286,7 +1385,8 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) goto err_exit_phy; } - iproc_pcie_reset(pcie); + iproc_pcie_perst_ctrl(pcie, true); + iproc_pcie_perst_ctrl(pcie, false); if (pcie->need_ob_cfg) { ret = iproc_pcie_map_ranges(pcie, res); diff --git a/drivers/pci/host/pcie-iproc.h b/drivers/pci/host/pcie-iproc.h index 0bbe2ea44f3e..a6b55cec9a66 100644 --- a/drivers/pci/host/pcie-iproc.h +++ b/drivers/pci/host/pcie-iproc.h @@ -110,6 +110,7 @@ struct iproc_pcie { int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res); int iproc_pcie_remove(struct iproc_pcie *pcie); +int iproc_pcie_shutdown(struct iproc_pcie *pcie); #ifdef CONFIG_PCIE_IPROC_MSI int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node); diff --git a/drivers/pci/host/pcie-mediatek.c b/drivers/pci/host/pcie-mediatek.c index 5a9d8589ea0b..db93efdf1d63 100644 --- a/drivers/pci/host/pcie-mediatek.c +++ b/drivers/pci/host/pcie-mediatek.c @@ -3,6 +3,7 @@ * * Copyright (c) 2017 MediaTek Inc. * Author: Ryder Lee <ryder.lee@mediatek.com> + * Honghui Zhang <honghui.zhang@mediatek.com> * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -16,6 +17,9 @@ #include <linux/clk.h> #include <linux/delay.h> +#include <linux/iopoll.h> +#include <linux/irq.h> +#include <linux/irqdomain.h> #include <linux/kernel.h> #include <linux/of_address.h> #include <linux/of_pci.h> @@ -63,16 +67,104 @@ #define PCIE_FC_CREDIT_MASK (GENMASK(31, 31) | GENMASK(28, 16)) #define PCIE_FC_CREDIT_VAL(x) ((x) << 16) +/* PCIe V2 share registers */ +#define PCIE_SYS_CFG_V2 0x0 +#define PCIE_CSR_LTSSM_EN(x) BIT(0 + (x) * 8) +#define PCIE_CSR_ASPM_L1_EN(x) BIT(1 + (x) * 8) + +/* PCIe V2 per-port registers */ +#define PCIE_MSI_VECTOR 0x0c0 +#define PCIE_INT_MASK 0x420 +#define INTX_MASK GENMASK(19, 16) +#define INTX_SHIFT 16 +#define PCIE_INT_STATUS 0x424 +#define MSI_STATUS BIT(23) +#define PCIE_IMSI_STATUS 0x42c +#define PCIE_IMSI_ADDR 0x430 +#define MSI_MASK BIT(23) +#define MTK_MSI_IRQS_NUM 32 + +#define PCIE_AHB_TRANS_BASE0_L 0x438 +#define PCIE_AHB_TRANS_BASE0_H 0x43c +#define AHB2PCIE_SIZE(x) ((x) & GENMASK(4, 0)) +#define PCIE_AXI_WINDOW0 0x448 +#define WIN_ENABLE BIT(7) + +/* PCIe V2 configuration transaction header */ +#define PCIE_CFG_HEADER0 0x460 +#define PCIE_CFG_HEADER1 0x464 +#define PCIE_CFG_HEADER2 0x468 +#define PCIE_CFG_WDATA 0x470 +#define PCIE_APP_TLP_REQ 0x488 +#define PCIE_CFG_RDATA 0x48c +#define APP_CFG_REQ BIT(0) +#define APP_CPL_STATUS GENMASK(7, 5) + +#define CFG_WRRD_TYPE_0 4 +#define CFG_WR_FMT 2 +#define CFG_RD_FMT 0 + +#define CFG_DW0_LENGTH(length) ((length) & GENMASK(9, 0)) +#define CFG_DW0_TYPE(type) (((type) << 24) & GENMASK(28, 24)) +#define CFG_DW0_FMT(fmt) (((fmt) << 29) & GENMASK(31, 29)) +#define CFG_DW2_REGN(regn) ((regn) & GENMASK(11, 2)) +#define CFG_DW2_FUN(fun) (((fun) << 16) & GENMASK(18, 16)) +#define CFG_DW2_DEV(dev) (((dev) << 19) & GENMASK(23, 19)) +#define CFG_DW2_BUS(bus) (((bus) << 24) & GENMASK(31, 24)) +#define CFG_HEADER_DW0(type, fmt) \ + (CFG_DW0_LENGTH(1) | CFG_DW0_TYPE(type) | CFG_DW0_FMT(fmt)) +#define CFG_HEADER_DW1(where, size) \ + (GENMASK(((size) - 1), 0) << ((where) & 0x3)) +#define CFG_HEADER_DW2(regn, fun, dev, bus) \ + (CFG_DW2_REGN(regn) | CFG_DW2_FUN(fun) | \ + CFG_DW2_DEV(dev) | CFG_DW2_BUS(bus)) + +#define PCIE_RST_CTRL 0x510 +#define PCIE_PHY_RSTB BIT(0) +#define PCIE_PIPE_SRSTB BIT(1) +#define PCIE_MAC_SRSTB BIT(2) +#define 
PCIE_CRSTB BIT(3) +#define PCIE_PERSTB BIT(8) +#define PCIE_LINKDOWN_RST_EN GENMASK(15, 13) +#define PCIE_LINK_STATUS_V2 0x804 +#define PCIE_PORT_LINKUP_V2 BIT(10) + +struct mtk_pcie_port; + +/** + * struct mtk_pcie_soc - differentiate between host generations + * @has_msi: whether this host supports MSI interrupts or not + * @ops: pointer to configuration access functions + * @startup: pointer to controller setting functions + * @setup_irq: pointer to initialize IRQ functions + */ +struct mtk_pcie_soc { + bool has_msi; + struct pci_ops *ops; + int (*startup)(struct mtk_pcie_port *port); + int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node); +}; + /** * struct mtk_pcie_port - PCIe port information * @base: IO mapped register base * @list: port list * @pcie: pointer to PCIe host info * @reset: pointer to port reset control - * @sys_ck: pointer to bus clock - * @phy: pointer to phy control block + * @sys_ck: pointer to transaction/data link layer clock + * @ahb_ck: pointer to AHB slave interface operating clock for CSR access + * and RC initiated MMIO access + * @axi_ck: pointer to application layer MMIO channel operating clock + * @aux_ck: pointer to pe2_mac_bridge and pe2_mac_core operating clock + * when pcie_mac_ck/pcie_pipe_ck is turned off + * @obff_ck: pointer to OBFF functional block operating clock + * @pipe_ck: pointer to LTSSM and PHY/MAC layer operating clock + * @phy: pointer to PHY control block * @lane: lane count - * @index: port index + * @slot: port slot + * @irq_domain: legacy INTx IRQ domain + * @msi_domain: MSI IRQ domain + * @msi_irq_in_use: bit map for assigned MSI IRQ */ struct mtk_pcie_port { void __iomem *base; @@ -80,9 +172,17 @@ struct mtk_pcie_port { struct mtk_pcie *pcie; struct reset_control *reset; struct clk *sys_ck; + struct clk *ahb_ck; + struct clk *axi_ck; + struct clk *aux_ck; + struct clk *obff_ck; + struct clk *pipe_ck; struct phy *phy; u32 lane; - u32 index; + u32 slot; + struct irq_domain *irq_domain; + struct irq_domain *msi_domain; + DECLARE_BITMAP(msi_irq_in_use, MTK_MSI_IRQS_NUM); }; /** @@ -96,6 +196,7 @@ struct mtk_pcie_port { * @busn: bus range * @offset: IO / Memory offset * @ports: pointer to PCIe port information + * @soc: pointer to SoC-dependent operations */ struct mtk_pcie { struct device *dev; @@ -111,13 +212,9 @@ struct mtk_pcie { resource_size_t io; } offset; struct list_head ports; + const struct mtk_pcie_soc *soc; }; -static inline bool mtk_pcie_link_up(struct mtk_pcie_port *port) -{ - return !!(readl(port->base + PCIE_LINK_STATUS) & PCIE_PORT_LINKUP); -} - static void mtk_pcie_subsys_powerdown(struct mtk_pcie *pcie) { struct device *dev = pcie->dev; @@ -146,6 +243,12 @@ static void mtk_pcie_put_resources(struct mtk_pcie *pcie) list_for_each_entry_safe(port, tmp, &pcie->ports, list) { phy_power_off(port->phy); + phy_exit(port->phy); + clk_disable_unprepare(port->pipe_ck); + clk_disable_unprepare(port->obff_ck); + clk_disable_unprepare(port->axi_ck); + clk_disable_unprepare(port->aux_ck); + clk_disable_unprepare(port->ahb_ck); clk_disable_unprepare(port->sys_ck); mtk_pcie_port_free(port); } @@ -153,11 +256,412 @@ static void mtk_pcie_put_resources(struct mtk_pcie *pcie) mtk_pcie_subsys_powerdown(pcie); } +static int mtk_pcie_check_cfg_cpld(struct mtk_pcie_port *port) +{ + u32 val; + int err; + + err = readl_poll_timeout_atomic(port->base + PCIE_APP_TLP_REQ, val, + !(val & APP_CFG_REQ), 10, + 100 * USEC_PER_MSEC); + if (err) + return PCIBIOS_SET_FAILED; + + if (readl(port->base + PCIE_APP_TLP_REQ) & APP_CPL_STATUS) 
+ return PCIBIOS_SET_FAILED; + + return PCIBIOS_SUCCESSFUL; +} + +static int mtk_pcie_hw_rd_cfg(struct mtk_pcie_port *port, u32 bus, u32 devfn, + int where, int size, u32 *val) +{ + u32 tmp; + + /* Write PCIe configuration transaction header for Cfgrd */ + writel(CFG_HEADER_DW0(CFG_WRRD_TYPE_0, CFG_RD_FMT), + port->base + PCIE_CFG_HEADER0); + writel(CFG_HEADER_DW1(where, size), port->base + PCIE_CFG_HEADER1); + writel(CFG_HEADER_DW2(where, PCI_FUNC(devfn), PCI_SLOT(devfn), bus), + port->base + PCIE_CFG_HEADER2); + + /* Trigger h/w to transmit Cfgrd TLP */ + tmp = readl(port->base + PCIE_APP_TLP_REQ); + tmp |= APP_CFG_REQ; + writel(tmp, port->base + PCIE_APP_TLP_REQ); + + /* Check completion status */ + if (mtk_pcie_check_cfg_cpld(port)) + return PCIBIOS_SET_FAILED; + + /* Read cpld payload of Cfgrd */ + *val = readl(port->base + PCIE_CFG_RDATA); + + if (size == 1) + *val = (*val >> (8 * (where & 3))) & 0xff; + else if (size == 2) + *val = (*val >> (8 * (where & 3))) & 0xffff; + + return PCIBIOS_SUCCESSFUL; +} + +static int mtk_pcie_hw_wr_cfg(struct mtk_pcie_port *port, u32 bus, u32 devfn, + int where, int size, u32 val) +{ + /* Write PCIe configuration transaction header for Cfgwr */ + writel(CFG_HEADER_DW0(CFG_WRRD_TYPE_0, CFG_WR_FMT), + port->base + PCIE_CFG_HEADER0); + writel(CFG_HEADER_DW1(where, size), port->base + PCIE_CFG_HEADER1); + writel(CFG_HEADER_DW2(where, PCI_FUNC(devfn), PCI_SLOT(devfn), bus), + port->base + PCIE_CFG_HEADER2); + + /* Write Cfgwr data */ + val = val << 8 * (where & 3); + writel(val, port->base + PCIE_CFG_WDATA); + + /* Trigger h/w to transmit Cfgwr TLP */ + val = readl(port->base + PCIE_APP_TLP_REQ); + val |= APP_CFG_REQ; + writel(val, port->base + PCIE_APP_TLP_REQ); + + /* Check completion status */ + return mtk_pcie_check_cfg_cpld(port); +} + +static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus, + unsigned int devfn) +{ + struct mtk_pcie *pcie = bus->sysdata; + struct mtk_pcie_port *port; + + list_for_each_entry(port, &pcie->ports, list) + if (port->slot == PCI_SLOT(devfn)) + return port; + + return NULL; +} + +static int mtk_pcie_config_read(struct pci_bus *bus, unsigned int devfn, + int where, int size, u32 *val) +{ + struct mtk_pcie_port *port; + u32 bn = bus->number; + int ret; + + port = mtk_pcie_find_port(bus, devfn); + if (!port) { + *val = ~0; + return PCIBIOS_DEVICE_NOT_FOUND; + } + + ret = mtk_pcie_hw_rd_cfg(port, bn, devfn, where, size, val); + if (ret) + *val = ~0; + + return ret; +} + +static int mtk_pcie_config_write(struct pci_bus *bus, unsigned int devfn, + int where, int size, u32 val) +{ + struct mtk_pcie_port *port; + u32 bn = bus->number; + + port = mtk_pcie_find_port(bus, devfn); + if (!port) + return PCIBIOS_DEVICE_NOT_FOUND; + + return mtk_pcie_hw_wr_cfg(port, bn, devfn, where, size, val); +} + +static struct pci_ops mtk_pcie_ops_v2 = { + .read = mtk_pcie_config_read, + .write = mtk_pcie_config_write, +}; + +static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) +{ + struct mtk_pcie *pcie = port->pcie; + struct resource *mem = &pcie->mem; + u32 val; + size_t size; + int err; + + /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ + if (pcie->base) { + val = readl(pcie->base + PCIE_SYS_CFG_V2); + val |= PCIE_CSR_LTSSM_EN(port->slot) | + PCIE_CSR_ASPM_L1_EN(port->slot); + writel(val, pcie->base + PCIE_SYS_CFG_V2); + } + + /* Assert all reset signals */ + writel(0, port->base + PCIE_RST_CTRL); + + /* + * Enable PCIe link down reset, if link status changed from link up to + * link down, this will 
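mtk_pcie_hw_rd_cfg()/hw_wr_cfg() above build a type-0 configuration TLP by hand from the CFG_HEADER_* macros. A worked example of the three header dwords for a hypothetical 4-byte read at offset 0x10 of bus 1, devfn 0 (the values follow directly from the macro definitions earlier in this file):

	u32 dw0 = CFG_HEADER_DW0(CFG_WRRD_TYPE_0, CFG_RD_FMT);
				/* length 1 dword, type 4:       0x04000001 */
	u32 dw1 = CFG_HEADER_DW1(0x10, 4);
				/* byte enables 0xf << (0x10 & 3): 0x0000000f */
	u32 dw2 = CFG_HEADER_DW2(0x10, PCI_FUNC(0), PCI_SLOT(0), 1);
				/* register 0x10, bus 1 << 24:    0x01000010 */

The write path differs only in using CFG_WR_FMT for dw0 and in shifting the payload by 8 * (where & 3) before writing it to PCIE_CFG_WDATA.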
reset MAC control registers and configuration + * space. + */ + writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL); + + /* De-assert PHY, PE, PIPE, MAC and configuration reset */ + val = readl(port->base + PCIE_RST_CTRL); + val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB | + PCIE_MAC_SRSTB | PCIE_CRSTB; + writel(val, port->base + PCIE_RST_CTRL); + + /* 100ms timeout value should be enough for Gen1/2 training */ + err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, + !!(val & PCIE_PORT_LINKUP_V2), 20, + 100 * USEC_PER_MSEC); + if (err) + return -ETIMEDOUT; + + /* Set INTx mask */ + val = readl(port->base + PCIE_INT_MASK); + val &= ~INTX_MASK; + writel(val, port->base + PCIE_INT_MASK); + + /* Set AHB to PCIe translation windows */ + size = mem->end - mem->start; + val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); + writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); + + val = upper_32_bits(mem->start); + writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); + + /* Set PCIe to AXI translation memory space.*/ + val = fls(0xffffffff) | WIN_ENABLE; + writel(val, port->base + PCIE_AXI_WINDOW0); + + return 0; +} + +static int mtk_pcie_msi_alloc(struct mtk_pcie_port *port) +{ + int msi; + + msi = find_first_zero_bit(port->msi_irq_in_use, MTK_MSI_IRQS_NUM); + if (msi < MTK_MSI_IRQS_NUM) + set_bit(msi, port->msi_irq_in_use); + else + return -ENOSPC; + + return msi; +} + +static void mtk_pcie_msi_free(struct mtk_pcie_port *port, unsigned long hwirq) +{ + clear_bit(hwirq, port->msi_irq_in_use); +} + +static int mtk_pcie_msi_setup_irq(struct msi_controller *chip, + struct pci_dev *pdev, struct msi_desc *desc) +{ + struct mtk_pcie_port *port; + struct msi_msg msg; + unsigned int irq; + int hwirq; + phys_addr_t msg_addr; + + port = mtk_pcie_find_port(pdev->bus, pdev->devfn); + if (!port) + return -EINVAL; + + hwirq = mtk_pcie_msi_alloc(port); + if (hwirq < 0) + return hwirq; + + irq = irq_create_mapping(port->msi_domain, hwirq); + if (!irq) { + mtk_pcie_msi_free(port, hwirq); + return -EINVAL; + } + + chip->dev = &pdev->dev; + + irq_set_msi_desc(irq, desc); + + /* MT2712/MT7622 only support 32-bit MSI addresses */ + msg_addr = virt_to_phys(port->base + PCIE_MSI_VECTOR); + msg.address_hi = 0; + msg.address_lo = lower_32_bits(msg_addr); + msg.data = hwirq; + + pci_write_msi_msg(irq, &msg); + + return 0; +} + +static void mtk_msi_teardown_irq(struct msi_controller *chip, unsigned int irq) +{ + struct pci_dev *pdev = to_pci_dev(chip->dev); + struct irq_data *d = irq_get_irq_data(irq); + irq_hw_number_t hwirq = irqd_to_hwirq(d); + struct mtk_pcie_port *port; + + port = mtk_pcie_find_port(pdev->bus, pdev->devfn); + if (!port) + return; + + irq_dispose_mapping(irq); + mtk_pcie_msi_free(port, hwirq); +} + +static struct msi_controller mtk_pcie_msi_chip = { + .setup_irq = mtk_pcie_msi_setup_irq, + .teardown_irq = mtk_msi_teardown_irq, +}; + +static struct irq_chip mtk_msi_irq_chip = { + .name = "MTK PCIe MSI", + .irq_enable = pci_msi_unmask_irq, + .irq_disable = pci_msi_mask_irq, + .irq_mask = pci_msi_mask_irq, + .irq_unmask = pci_msi_unmask_irq, +}; + +static int mtk_pcie_msi_map(struct irq_domain *domain, unsigned int irq, + irq_hw_number_t hwirq) +{ + irq_set_chip_and_handler(irq, &mtk_msi_irq_chip, handle_simple_irq); + irq_set_chip_data(irq, domain->host_data); + + return 0; +} + +static const struct irq_domain_ops msi_domain_ops = { + .map = mtk_pcie_msi_map, +}; + +static void mtk_pcie_enable_msi(struct mtk_pcie_port *port) +{ + u32 val; + phys_addr_t msg_addr; + + msg_addr = 
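mtk_pcie_msi_alloc()/mtk_pcie_msi_free() above track the 32 MSI vectors with a plain bitmap. The same bookkeeping in isolation; like the original, no locking is shown around the find/set pair:

#define SKETCH_MSI_IRQS_NUM	32
static DECLARE_BITMAP(sketch_msi_in_use, SKETCH_MSI_IRQS_NUM);

static int sketch_msi_alloc(void)
{
	int hwirq = find_first_zero_bit(sketch_msi_in_use, SKETCH_MSI_IRQS_NUM);

	if (hwirq >= SKETCH_MSI_IRQS_NUM)
		return -ENOSPC;

	set_bit(hwirq, sketch_msi_in_use);
	return hwirq;
}

static void sketch_msi_free(unsigned long hwirq)
{
	clear_bit(hwirq, sketch_msi_in_use);
}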
virt_to_phys(port->base + PCIE_MSI_VECTOR); + val = lower_32_bits(msg_addr); + writel(val, port->base + PCIE_IMSI_ADDR); + + val = readl(port->base + PCIE_INT_MASK); + val &= ~MSI_MASK; + writel(val, port->base + PCIE_INT_MASK); +} + +static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq, + irq_hw_number_t hwirq) +{ + irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq); + irq_set_chip_data(irq, domain->host_data); + + return 0; +} + +static const struct irq_domain_ops intx_domain_ops = { + .map = mtk_pcie_intx_map, +}; + +static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port, + struct device_node *node) +{ + struct device *dev = port->pcie->dev; + struct device_node *pcie_intc_node; + + /* Setup INTx */ + pcie_intc_node = of_get_next_child(node, NULL); + if (!pcie_intc_node) { + dev_err(dev, "no PCIe Intc node found\n"); + return -ENODEV; + } + + port->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, + &intx_domain_ops, port); + if (!port->irq_domain) { + dev_err(dev, "failed to get INTx IRQ domain\n"); + return -ENODEV; + } + + if (IS_ENABLED(CONFIG_PCI_MSI)) { + port->msi_domain = irq_domain_add_linear(node, MTK_MSI_IRQS_NUM, + &msi_domain_ops, + &mtk_pcie_msi_chip); + if (!port->msi_domain) { + dev_err(dev, "failed to create MSI IRQ domain\n"); + return -ENODEV; + } + mtk_pcie_enable_msi(port); + } + + return 0; +} + +static irqreturn_t mtk_pcie_intr_handler(int irq, void *data) +{ + struct mtk_pcie_port *port = (struct mtk_pcie_port *)data; + unsigned long status; + u32 virq; + u32 bit = INTX_SHIFT; + + while ((status = readl(port->base + PCIE_INT_STATUS)) & INTX_MASK) { + for_each_set_bit_from(bit, &status, PCI_NUM_INTX + INTX_SHIFT) { + /* Clear the INTx */ + writel(1 << bit, port->base + PCIE_INT_STATUS); + virq = irq_find_mapping(port->irq_domain, + bit - INTX_SHIFT); + generic_handle_irq(virq); + } + } + + if (IS_ENABLED(CONFIG_PCI_MSI)) { + while ((status = readl(port->base + PCIE_INT_STATUS)) & MSI_STATUS) { + unsigned long imsi_status; + + while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) { + for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) { + /* Clear the MSI */ + writel(1 << bit, port->base + PCIE_IMSI_STATUS); + virq = irq_find_mapping(port->msi_domain, bit); + generic_handle_irq(virq); + } + } + /* Clear MSI interrupt status */ + writel(MSI_STATUS, port->base + PCIE_INT_STATUS); + } + } + + return IRQ_HANDLED; +} + +static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, + struct device_node *node) +{ + struct mtk_pcie *pcie = port->pcie; + struct device *dev = pcie->dev; + struct platform_device *pdev = to_platform_device(dev); + int err, irq; + + irq = platform_get_irq(pdev, port->slot); + err = devm_request_irq(dev, irq, mtk_pcie_intr_handler, + IRQF_SHARED, "mtk-pcie", port); + if (err) { + dev_err(dev, "unable to request IRQ %d\n", irq); + return err; + } + + err = mtk_pcie_init_irq_domain(port, node); + if (err) { + dev_err(dev, "failed to init PCIe IRQ domain\n"); + return err; + } + + return 0; +} + static void __iomem *mtk_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, int where) { - struct pci_host_bridge *host = pci_find_host_bridge(bus); - struct mtk_pcie *pcie = pci_host_bridge_priv(host); + struct mtk_pcie *pcie = bus->sysdata; writel(PCIE_CONF_ADDR(where, PCI_FUNC(devfn), PCI_SLOT(devfn), bus->number), pcie->base + PCIE_CFG_ADDR); @@ -171,16 +675,34 @@ static struct pci_ops mtk_pcie_ops = { .write = pci_generic_config_write, }; -static void mtk_pcie_configure_rc(struct 
mtk_pcie_port *port) +static int mtk_pcie_startup_port(struct mtk_pcie_port *port) { struct mtk_pcie *pcie = port->pcie; - u32 func = PCI_FUNC(port->index << 3); - u32 slot = PCI_SLOT(port->index << 3); + u32 func = PCI_FUNC(port->slot << 3); + u32 slot = PCI_SLOT(port->slot << 3); u32 val; + int err; + + /* assert port PERST_N */ + val = readl(pcie->base + PCIE_SYS_CFG); + val |= PCIE_PORT_PERST(port->slot); + writel(val, pcie->base + PCIE_SYS_CFG); + + /* de-assert port PERST_N */ + val = readl(pcie->base + PCIE_SYS_CFG); + val &= ~PCIE_PORT_PERST(port->slot); + writel(val, pcie->base + PCIE_SYS_CFG); + + /* 100ms timeout value should be enough for Gen1/2 training */ + err = readl_poll_timeout(port->base + PCIE_LINK_STATUS, val, + !!(val & PCIE_PORT_LINKUP), 20, + 100 * USEC_PER_MSEC); + if (err) + return -ETIMEDOUT; /* enable interrupt */ val = readl(pcie->base + PCIE_INT_ENABLE); - val |= PCIE_PORT_INT_EN(port->index); + val |= PCIE_PORT_INT_EN(port->slot); writel(val, pcie->base + PCIE_INT_ENABLE); /* map to all DDR region. We need to set it before cfg operation. */ @@ -209,67 +731,94 @@ static void mtk_pcie_configure_rc(struct mtk_pcie_port *port) writel(PCIE_CONF_ADDR(PCIE_FTS_NUM, func, slot, 0), pcie->base + PCIE_CFG_ADDR); writel(val, pcie->base + PCIE_CFG_DATA); + + return 0; } -static void mtk_pcie_assert_ports(struct mtk_pcie_port *port) +static void mtk_pcie_enable_port(struct mtk_pcie_port *port) { struct mtk_pcie *pcie = port->pcie; - u32 val; + struct device *dev = pcie->dev; + int err; - /* assert port PERST_N */ - val = readl(pcie->base + PCIE_SYS_CFG); - val |= PCIE_PORT_PERST(port->index); - writel(val, pcie->base + PCIE_SYS_CFG); + err = clk_prepare_enable(port->sys_ck); + if (err) { + dev_err(dev, "failed to enable sys_ck%d clock\n", port->slot); + goto err_sys_clk; + } - /* de-assert port PERST_N */ - val = readl(pcie->base + PCIE_SYS_CFG); - val &= ~PCIE_PORT_PERST(port->index); - writel(val, pcie->base + PCIE_SYS_CFG); + err = clk_prepare_enable(port->ahb_ck); + if (err) { + dev_err(dev, "failed to enable ahb_ck%d\n", port->slot); + goto err_ahb_clk; + } - /* PCIe v2.0 need at least 100ms delay to train from Gen1 to Gen2 */ - msleep(100); -} + err = clk_prepare_enable(port->aux_ck); + if (err) { + dev_err(dev, "failed to enable aux_ck%d\n", port->slot); + goto err_aux_clk; + } -static void mtk_pcie_enable_ports(struct mtk_pcie_port *port) -{ - struct device *dev = port->pcie->dev; - int err; + err = clk_prepare_enable(port->axi_ck); + if (err) { + dev_err(dev, "failed to enable axi_ck%d\n", port->slot); + goto err_axi_clk; + } - err = clk_prepare_enable(port->sys_ck); + err = clk_prepare_enable(port->obff_ck); if (err) { - dev_err(dev, "failed to enable port%d clock\n", port->index); - goto err_sys_clk; + dev_err(dev, "failed to enable obff_ck%d\n", port->slot); + goto err_obff_clk; + } + + err = clk_prepare_enable(port->pipe_ck); + if (err) { + dev_err(dev, "failed to enable pipe_ck%d\n", port->slot); + goto err_pipe_clk; } reset_control_assert(port->reset); reset_control_deassert(port->reset); + err = phy_init(port->phy); + if (err) { + dev_err(dev, "failed to initialize port%d phy\n", port->slot); + goto err_phy_init; + } + err = phy_power_on(port->phy); if (err) { - dev_err(dev, "failed to power on port%d phy\n", port->index); + dev_err(dev, "failed to power on port%d phy\n", port->slot); goto err_phy_on; } - mtk_pcie_assert_ports(port); - - /* if link up, then setup root port configuration space */ - if (mtk_pcie_link_up(port)) { - 
mtk_pcie_configure_rc(port); + if (!pcie->soc->startup(port)) return; - } - dev_info(dev, "Port%d link down\n", port->index); + dev_info(dev, "Port%d link down\n", port->slot); phy_power_off(port->phy); err_phy_on: + phy_exit(port->phy); +err_phy_init: + clk_disable_unprepare(port->pipe_ck); +err_pipe_clk: + clk_disable_unprepare(port->obff_ck); +err_obff_clk: + clk_disable_unprepare(port->axi_ck); +err_axi_clk: + clk_disable_unprepare(port->aux_ck); +err_aux_clk: + clk_disable_unprepare(port->ahb_ck); +err_ahb_clk: clk_disable_unprepare(port->sys_ck); err_sys_clk: mtk_pcie_port_free(port); } -static int mtk_pcie_parse_ports(struct mtk_pcie *pcie, - struct device_node *node, - int index) +static int mtk_pcie_parse_port(struct mtk_pcie *pcie, + struct device_node *node, + int slot) { struct mtk_pcie_port *port; struct resource *regs; @@ -288,34 +837,87 @@ static int mtk_pcie_parse_ports(struct mtk_pcie *pcie, return err; } - regs = platform_get_resource(pdev, IORESOURCE_MEM, index + 1); + snprintf(name, sizeof(name), "port%d", slot); + regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, name); port->base = devm_ioremap_resource(dev, regs); if (IS_ERR(port->base)) { - dev_err(dev, "failed to map port%d base\n", index); + dev_err(dev, "failed to map port%d base\n", slot); return PTR_ERR(port->base); } - snprintf(name, sizeof(name), "sys_ck%d", index); + snprintf(name, sizeof(name), "sys_ck%d", slot); port->sys_ck = devm_clk_get(dev, name); if (IS_ERR(port->sys_ck)) { - dev_err(dev, "failed to get port%d clock\n", index); + dev_err(dev, "failed to get sys_ck%d clock\n", slot); return PTR_ERR(port->sys_ck); } - snprintf(name, sizeof(name), "pcie-rst%d", index); - port->reset = devm_reset_control_get_optional(dev, name); + /* sys_ck might be divided into the following parts in some chips */ + snprintf(name, sizeof(name), "ahb_ck%d", slot); + port->ahb_ck = devm_clk_get(dev, name); + if (IS_ERR(port->ahb_ck)) { + if (PTR_ERR(port->ahb_ck) == -EPROBE_DEFER) + return -EPROBE_DEFER; + + port->ahb_ck = NULL; + } + + snprintf(name, sizeof(name), "axi_ck%d", slot); + port->axi_ck = devm_clk_get(dev, name); + if (IS_ERR(port->axi_ck)) { + if (PTR_ERR(port->axi_ck) == -EPROBE_DEFER) + return -EPROBE_DEFER; + + port->axi_ck = NULL; + } + + snprintf(name, sizeof(name), "aux_ck%d", slot); + port->aux_ck = devm_clk_get(dev, name); + if (IS_ERR(port->aux_ck)) { + if (PTR_ERR(port->aux_ck) == -EPROBE_DEFER) + return -EPROBE_DEFER; + + port->aux_ck = NULL; + } + + snprintf(name, sizeof(name), "obff_ck%d", slot); + port->obff_ck = devm_clk_get(dev, name); + if (IS_ERR(port->obff_ck)) { + if (PTR_ERR(port->obff_ck) == -EPROBE_DEFER) + return -EPROBE_DEFER; + + port->obff_ck = NULL; + } + + snprintf(name, sizeof(name), "pipe_ck%d", slot); + port->pipe_ck = devm_clk_get(dev, name); + if (IS_ERR(port->pipe_ck)) { + if (PTR_ERR(port->pipe_ck) == -EPROBE_DEFER) + return -EPROBE_DEFER; + + port->pipe_ck = NULL; + } + + snprintf(name, sizeof(name), "pcie-rst%d", slot); + port->reset = devm_reset_control_get_optional_exclusive(dev, name); if (PTR_ERR(port->reset) == -EPROBE_DEFER) return PTR_ERR(port->reset); /* some platforms may use default PHY setting */ - snprintf(name, sizeof(name), "pcie-phy%d", index); + snprintf(name, sizeof(name), "pcie-phy%d", slot); port->phy = devm_phy_optional_get(dev, name); if (IS_ERR(port->phy)) return PTR_ERR(port->phy); - port->index = index; + port->slot = slot; port->pcie = pcie; + if (pcie->soc->setup_irq) { + err = pcie->soc->setup_irq(port, node); + if (err) + return err; + } 
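Aside (editor's illustration): the per-slot clock lookups in mtk_pcie_parse_port() above, for ahb_ck, axi_ck, aux_ck, obff_ck and pipe_ck, all repeat the same optional-clock idiom: a missing clock is tolerated by downgrading the handle to NULL, but -EPROBE_DEFER must still be propagated so the port is re-probed once the clock provider turns up. A minimal sketch of that idiom, using a hypothetical helper name that is not part of this patch:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/errno.h>

/* Editor's sketch, not part of this patch. */
static struct clk *mtk_pcie_get_optional_clk(struct device *dev,
					     const char *name)
{
	struct clk *clk = devm_clk_get(dev, name);

	if (IS_ERR(clk)) {
		/* Defer the probe until the clock provider is registered. */
		if (PTR_ERR(clk) == -EPROBE_DEFER)
			return clk;
		/* Any other error is treated as "clock not wired up". */
		clk = NULL;
	}

	return clk;
}

With such a helper each lookup collapses to a single IS_ERR() check, e.g. port->ahb_ck = mtk_pcie_get_optional_clk(dev, name). The NULL convention works because the clk API treats a NULL clock as a no-op, which is also why mtk_pcie_enable_port() can call clk_prepare_enable() on all six clocks unconditionally; later kernels added devm_clk_get_optional() to encapsulate the same behaviour.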
+ INIT_LIST_HEAD(&port->list); list_add_tail(&port->list, &pcie->ports); @@ -329,12 +931,14 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie) struct resource *regs; int err; - /* get shared registers */ - regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); - pcie->base = devm_ioremap_resource(dev, regs); - if (IS_ERR(pcie->base)) { - dev_err(dev, "failed to map shared register\n"); - return PTR_ERR(pcie->base); + /* get shared registers, which are optional */ + regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "subsys"); + if (regs) { + pcie->base = devm_ioremap_resource(dev, regs); + if (IS_ERR(pcie->base)) { + dev_err(dev, "failed to map shared register\n"); + return PTR_ERR(pcie->base); + } } pcie->free_ck = devm_clk_get(dev, "free_ck"); @@ -422,7 +1026,7 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie) } for_each_available_child_of_node(node, child) { - int index; + int slot; err = of_pci_get_devfn(child); if (err < 0) { @@ -430,9 +1034,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie) return err; } - index = PCI_SLOT(err); + slot = PCI_SLOT(err); - err = mtk_pcie_parse_ports(pcie, child, index); + err = mtk_pcie_parse_port(pcie, child, slot); if (err) return err; } @@ -443,7 +1047,7 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie) /* enable each port, and then check link status */ list_for_each_entry_safe(port, tmp, &pcie->ports, list) - mtk_pcie_enable_ports(port); + mtk_pcie_enable_port(port); /* power down PCIe subsys if slots are all empty (link down) */ if (list_empty(&pcie->ports)) @@ -480,9 +1084,12 @@ static int mtk_pcie_register_host(struct pci_host_bridge *host) host->busnr = pcie->busn.start; host->dev.parent = pcie->dev; - host->ops = &mtk_pcie_ops; + host->ops = pcie->soc->ops; host->map_irq = of_irq_parse_and_map_pci; host->swizzle_irq = pci_common_swizzle; + host->sysdata = pcie; + if (IS_ENABLED(CONFIG_PCI_MSI) && pcie->soc->has_msi) + host->msi = &mtk_pcie_msi_chip; err = pci_scan_root_bus_bridge(host); if (err < 0) @@ -513,6 +1120,7 @@ static int mtk_pcie_probe(struct platform_device *pdev) pcie = pci_host_bridge_priv(host); pcie->dev = dev; + pcie->soc = of_device_get_match_data(dev); platform_set_drvdata(pdev, pcie); INIT_LIST_HEAD(&pcie->ports); @@ -537,9 +1145,23 @@ put_resources: return err; } +static const struct mtk_pcie_soc mtk_pcie_soc_v1 = { + .ops = &mtk_pcie_ops, + .startup = mtk_pcie_startup_port, +}; + +static const struct mtk_pcie_soc mtk_pcie_soc_v2 = { + .has_msi = true, + .ops = &mtk_pcie_ops_v2, + .startup = mtk_pcie_startup_port_v2, + .setup_irq = mtk_pcie_setup_irq, +}; + static const struct of_device_id mtk_pcie_ids[] = { - { .compatible = "mediatek,mt7623-pcie"}, - { .compatible = "mediatek,mt2701-pcie"}, + { .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 }, + { .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 }, + { .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_v2 }, + { .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_v2 }, {}, }; diff --git a/drivers/pci/host/pcie-rcar.c b/drivers/pci/host/pcie-rcar.c index 246d485b24c6..4e0b25d09b0c 100644 --- a/drivers/pci/host/pcie-rcar.c +++ b/drivers/pci/host/pcie-rcar.c @@ -471,10 +471,8 @@ static int rcar_pcie_enable(struct rcar_pcie *pcie) bridge->msi = &pcie->msi.chip; ret = pci_scan_root_bus_bridge(bridge); - if (ret < 0) { - kfree(bridge); + if (ret < 0) return ret; - } bus = bridge->bus; @@ -1190,14 +1188,16 @@ static int rcar_pcie_probe(struct platform_device *pdev) return 0; -err_free_bridge: - 
pci_free_host_bridge(bridge); - err_pm_put: pm_runtime_put(dev); err_pm_disable: pm_runtime_disable(dev); + +err_free_bridge: + pci_free_host_bridge(bridge); + pci_free_resource_list(&pcie->resources); + return err; } diff --git a/drivers/pci/host/pcie-rockchip.c b/drivers/pci/host/pcie-rockchip.c index 7bb9870f6d8c..9051c6c8fea4 100644 --- a/drivers/pci/host/pcie-rockchip.c +++ b/drivers/pci/host/pcie-rockchip.c @@ -6,7 +6,7 @@ * Author: Shawn Lin <shawn.lin@rock-chips.com> * Wenrui Li <wenrui.li@rock-chips.com> * - * Bits taken from Synopsys Designware Host controller driver and + * Bits taken from Synopsys DesignWare Host controller driver and * ARM PCI Host generic driver. * * This program is free software: you can redistribute it and/or modify @@ -15,6 +15,7 @@ * (at your option) any later version. */ +#include <linux/bitrev.h> #include <linux/clk.h> #include <linux/delay.h> #include <linux/gpio/consumer.h> @@ -47,6 +48,7 @@ #define HIWORD_UPDATE_BIT(val) HIWORD_UPDATE(val, val) #define ENCODE_LANES(x) ((((x) >> 1) & 3) << 4) +#define MAX_LANE_NUM 4 #define PCIE_CLIENT_BASE 0x0 #define PCIE_CLIENT_CONFIG (PCIE_CLIENT_BASE + 0x00) @@ -111,6 +113,9 @@ #define PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT 16 #define PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(x) \ (((x) >> 3) << PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT) +#define PCIE_CORE_LANE_MAP (PCIE_CORE_CTRL_MGMT_BASE + 0x200) +#define PCIE_CORE_LANE_MAP_MASK 0x0000000f +#define PCIE_CORE_LANE_MAP_REVERSE BIT(16) #define PCIE_CORE_INT_STATUS (PCIE_CORE_CTRL_MGMT_BASE + 0x20c) #define PCIE_CORE_INT_PRFPE BIT(0) #define PCIE_CORE_INT_CRFPE BIT(1) @@ -210,7 +215,8 @@ struct rockchip_pcie { void __iomem *reg_base; /* DT axi-base */ void __iomem *apb_base; /* DT apb-base */ - struct phy *phy; + bool legacy_phy; + struct phy *phys[MAX_LANE_NUM]; struct reset_control *core_rst; struct reset_control *mgmt_rst; struct reset_control *mgmt_sticky_rst; @@ -222,11 +228,13 @@ struct rockchip_pcie { struct clk *aclk_perf_pcie; struct clk *hclk_pcie; struct clk *clk_pcie_pm; + struct regulator *vpcie12v; /* 12V power supply */ struct regulator *vpcie3v3; /* 3.3V power supply */ struct regulator *vpcie1v8; /* 1.8V power supply */ struct regulator *vpcie0v9; /* 0.9V power supply */ struct gpio_desc *ep_gpio; u32 lanes; + u8 lanes_map; u8 root_bus_nr; int link_gen; struct device *dev; @@ -299,6 +307,24 @@ static int rockchip_pcie_valid_device(struct rockchip_pcie *rockchip, return 1; } +static u8 rockchip_pcie_lane_map(struct rockchip_pcie *rockchip) +{ + u32 val; + u8 map; + + if (rockchip->legacy_phy) + return GENMASK(MAX_LANE_NUM - 1, 0); + + val = rockchip_pcie_read(rockchip, PCIE_CORE_LANE_MAP); + map = val & PCIE_CORE_LANE_MAP_MASK; + + /* The link may be using a reverse-indexed mapping. 
*/ + if (val & PCIE_CORE_LANE_MAP_REVERSE) + map = bitrev8(map) >> 4; + + return map; +} + static int rockchip_pcie_rd_own_conf(struct rockchip_pcie *rockchip, int where, int size, u32 *val) { @@ -514,10 +540,10 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip) static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) { struct device *dev = rockchip->dev; - int err; + int err, i; u32 status; - gpiod_set_value(rockchip->ep_gpio, 0); + gpiod_set_value_cansleep(rockchip->ep_gpio, 0); err = reset_control_assert(rockchip->aclk_rst); if (err) { @@ -537,34 +563,36 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) return err; } - err = phy_init(rockchip->phy); - if (err < 0) { - dev_err(dev, "fail to init phy, err %d\n", err); - return err; + for (i = 0; i < MAX_LANE_NUM; i++) { + err = phy_init(rockchip->phys[i]); + if (err) { + dev_err(dev, "init phy%d err %d\n", i, err); + goto err_exit_phy; + } } err = reset_control_assert(rockchip->core_rst); if (err) { dev_err(dev, "assert core_rst err %d\n", err); - return err; + goto err_exit_phy; } err = reset_control_assert(rockchip->mgmt_rst); if (err) { dev_err(dev, "assert mgmt_rst err %d\n", err); - return err; + goto err_exit_phy; } err = reset_control_assert(rockchip->mgmt_sticky_rst); if (err) { dev_err(dev, "assert mgmt_sticky_rst err %d\n", err); - return err; + goto err_exit_phy; } err = reset_control_assert(rockchip->pipe_rst); if (err) { dev_err(dev, "assert pipe_rst err %d\n", err); - return err; + goto err_exit_phy; } udelay(10); @@ -572,19 +600,19 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) err = reset_control_deassert(rockchip->pm_rst); if (err) { dev_err(dev, "deassert pm_rst err %d\n", err); - return err; + goto err_exit_phy; } err = reset_control_deassert(rockchip->aclk_rst); if (err) { dev_err(dev, "deassert aclk_rst err %d\n", err); - return err; + goto err_exit_phy; } err = reset_control_deassert(rockchip->pclk_rst); if (err) { dev_err(dev, "deassert pclk_rst err %d\n", err); - return err; + goto err_exit_phy; } if (rockchip->link_gen == 2) @@ -602,10 +630,12 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) PCIE_CLIENT_MODE_RC, PCIE_CLIENT_CONFIG); - err = phy_power_on(rockchip->phy); - if (err) { - dev_err(dev, "fail to power on phy, err %d\n", err); - return err; + for (i = 0; i < MAX_LANE_NUM; i++) { + err = phy_power_on(rockchip->phys[i]); + if (err) { + dev_err(dev, "power on phy%d err %d\n", i, err); + goto err_power_off_phy; + } } /* @@ -615,25 +645,25 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) err = reset_control_deassert(rockchip->mgmt_sticky_rst); if (err) { dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err); - return err; + goto err_power_off_phy; } err = reset_control_deassert(rockchip->core_rst); if (err) { dev_err(dev, "deassert core_rst err %d\n", err); - return err; + goto err_power_off_phy; } err = reset_control_deassert(rockchip->mgmt_rst); if (err) { dev_err(dev, "deassert mgmt_rst err %d\n", err); - return err; + goto err_power_off_phy; } err = reset_control_deassert(rockchip->pipe_rst); if (err) { dev_err(dev, "deassert pipe_rst err %d\n", err); - return err; + goto err_power_off_phy; } /* Fix the transmitted FTS count desired to exit from L0s. 
*/ @@ -658,7 +688,7 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE, PCIE_CLIENT_CONFIG); - gpiod_set_value(rockchip->ep_gpio, 1); + gpiod_set_value_cansleep(rockchip->ep_gpio, 1); /* 500ms timeout value should be enough for Gen1/2 training */ err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1, @@ -666,7 +696,7 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) 500 * USEC_PER_MSEC); if (err) { dev_err(dev, "PCIe link training gen1 timeout!\n"); - return -ETIMEDOUT; + goto err_power_off_phy; } if (rockchip->link_gen == 2) { @@ -691,6 +721,15 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) PCIE_CORE_PL_CONF_LANE_SHIFT); dev_dbg(dev, "current link width is x%d\n", status); + /* Power off unused lane(s) */ + rockchip->lanes_map = rockchip_pcie_lane_map(rockchip); + for (i = 0; i < MAX_LANE_NUM; i++) { + if (!(rockchip->lanes_map & BIT(i))) { + dev_dbg(dev, "idling lane %d\n", i); + phy_power_off(rockchip->phys[i]); + } + } + rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID, PCIE_CORE_CONFIG_VENDOR); rockchip_pcie_write(rockchip, @@ -715,6 +754,26 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR); return 0; +err_power_off_phy: + while (i--) + phy_power_off(rockchip->phys[i]); + i = MAX_LANE_NUM; +err_exit_phy: + while (i--) + phy_exit(rockchip->phys[i]); + return err; +} + +static void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip) +{ + int i; + + for (i = 0; i < MAX_LANE_NUM; i++) { + /* inactive lanes are already powered off */ + if (rockchip->lanes_map & BIT(i)) + phy_power_off(rockchip->phys[i]); + phy_exit(rockchip->phys[i]); + } } static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg) @@ -853,6 +912,91 @@ static void rockchip_pcie_legacy_int_handler(struct irq_desc *desc) chained_irq_exit(chip, desc); } +static int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip) +{ + struct device *dev = rockchip->dev; + struct phy *phy; + char *name; + u32 i; + + phy = devm_phy_get(dev, "pcie-phy"); + if (!IS_ERR(phy)) { + rockchip->legacy_phy = true; + rockchip->phys[0] = phy; + dev_warn(dev, "legacy phy model is deprecated!\n"); + return 0; + } + + if (PTR_ERR(phy) == -EPROBE_DEFER) + return PTR_ERR(phy); + + dev_dbg(dev, "missing legacy phy; search for per-lane PHY\n"); + + for (i = 0; i < MAX_LANE_NUM; i++) { + name = kasprintf(GFP_KERNEL, "pcie-phy-%u", i); + if (!name) + return -ENOMEM; + + phy = devm_of_phy_get(dev, dev->of_node, name); + kfree(name); + + if (IS_ERR(phy)) { + if (PTR_ERR(phy) != -EPROBE_DEFER) + dev_err(dev, "missing phy for lane %d: %ld\n", + i, PTR_ERR(phy)); + return PTR_ERR(phy); + } + + rockchip->phys[i] = phy; + } + + return 0; +} + +static int rockchip_pcie_setup_irq(struct rockchip_pcie *rockchip) +{ + int irq, err; + struct device *dev = rockchip->dev; + struct platform_device *pdev = to_platform_device(dev); + + irq = platform_get_irq_byname(pdev, "sys"); + if (irq < 0) { + dev_err(dev, "missing sys IRQ resource\n"); + return irq; + } + + err = devm_request_irq(dev, irq, rockchip_pcie_subsys_irq_handler, + IRQF_SHARED, "pcie-sys", rockchip); + if (err) { + dev_err(dev, "failed to request PCIe subsystem IRQ\n"); + return err; + } + + irq = platform_get_irq_byname(pdev, "legacy"); + if (irq < 0) { + dev_err(dev, "missing legacy IRQ resource\n"); + return irq; + } + + irq_set_chained_handler_and_data(irq, + 
rockchip_pcie_legacy_int_handler, + rockchip); + + irq = platform_get_irq_byname(pdev, "client"); + if (irq < 0) { + dev_err(dev, "missing client IRQ resource\n"); + return irq; + } + + err = devm_request_irq(dev, irq, rockchip_pcie_client_irq_handler, + IRQF_SHARED, "pcie-client", rockchip); + if (err) { + dev_err(dev, "failed to request PCIe client IRQ\n"); + return err; + } + + return 0; +} /** * rockchip_pcie_parse_dt - Parse Device Tree @@ -866,7 +1010,6 @@ static int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip) struct platform_device *pdev = to_platform_device(dev); struct device_node *node = dev->of_node; struct resource *regs; - int irq; int err; regs = platform_get_resource_byname(pdev, @@ -883,12 +1026,9 @@ static int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip) if (IS_ERR(rockchip->apb_base)) return PTR_ERR(rockchip->apb_base); - rockchip->phy = devm_phy_get(dev, "pcie-phy"); - if (IS_ERR(rockchip->phy)) { - if (PTR_ERR(rockchip->phy) != -EPROBE_DEFER) - dev_err(dev, "missing phy\n"); - return PTR_ERR(rockchip->phy); - } + err = rockchip_pcie_get_phys(rockchip); + if (err) + return err; rockchip->lanes = 1; err = of_property_read_u32(node, "num-lanes", &rockchip->lanes); @@ -903,49 +1043,50 @@ static int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip) if (rockchip->link_gen < 0 || rockchip->link_gen > 2) rockchip->link_gen = 2; - rockchip->core_rst = devm_reset_control_get(dev, "core"); + rockchip->core_rst = devm_reset_control_get_exclusive(dev, "core"); if (IS_ERR(rockchip->core_rst)) { if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER) dev_err(dev, "missing core reset property in node\n"); return PTR_ERR(rockchip->core_rst); } - rockchip->mgmt_rst = devm_reset_control_get(dev, "mgmt"); + rockchip->mgmt_rst = devm_reset_control_get_exclusive(dev, "mgmt"); if (IS_ERR(rockchip->mgmt_rst)) { if (PTR_ERR(rockchip->mgmt_rst) != -EPROBE_DEFER) dev_err(dev, "missing mgmt reset property in node\n"); return PTR_ERR(rockchip->mgmt_rst); } - rockchip->mgmt_sticky_rst = devm_reset_control_get(dev, "mgmt-sticky"); + rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev, + "mgmt-sticky"); if (IS_ERR(rockchip->mgmt_sticky_rst)) { if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER) dev_err(dev, "missing mgmt-sticky reset property in node\n"); return PTR_ERR(rockchip->mgmt_sticky_rst); } - rockchip->pipe_rst = devm_reset_control_get(dev, "pipe"); + rockchip->pipe_rst = devm_reset_control_get_exclusive(dev, "pipe"); if (IS_ERR(rockchip->pipe_rst)) { if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER) dev_err(dev, "missing pipe reset property in node\n"); return PTR_ERR(rockchip->pipe_rst); } - rockchip->pm_rst = devm_reset_control_get(dev, "pm"); + rockchip->pm_rst = devm_reset_control_get_exclusive(dev, "pm"); if (IS_ERR(rockchip->pm_rst)) { if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER) dev_err(dev, "missing pm reset property in node\n"); return PTR_ERR(rockchip->pm_rst); } - rockchip->pclk_rst = devm_reset_control_get(dev, "pclk"); + rockchip->pclk_rst = devm_reset_control_get_exclusive(dev, "pclk"); if (IS_ERR(rockchip->pclk_rst)) { if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER) dev_err(dev, "missing pclk reset property in node\n"); return PTR_ERR(rockchip->pclk_rst); } - rockchip->aclk_rst = devm_reset_control_get(dev, "aclk"); + rockchip->aclk_rst = devm_reset_control_get_exclusive(dev, "aclk"); if (IS_ERR(rockchip->aclk_rst)) { if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER) dev_err(dev, "missing aclk reset property in node\n"); @@ -982,40 
+1123,15 @@ static int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip) return PTR_ERR(rockchip->clk_pcie_pm); } - irq = platform_get_irq_byname(pdev, "sys"); - if (irq < 0) { - dev_err(dev, "missing sys IRQ resource\n"); - return -EINVAL; - } - - err = devm_request_irq(dev, irq, rockchip_pcie_subsys_irq_handler, - IRQF_SHARED, "pcie-sys", rockchip); - if (err) { - dev_err(dev, "failed to request PCIe subsystem IRQ\n"); + err = rockchip_pcie_setup_irq(rockchip); + if (err) return err; - } - - irq = platform_get_irq_byname(pdev, "legacy"); - if (irq < 0) { - dev_err(dev, "missing legacy IRQ resource\n"); - return -EINVAL; - } - - irq_set_chained_handler_and_data(irq, - rockchip_pcie_legacy_int_handler, - rockchip); - - irq = platform_get_irq_byname(pdev, "client"); - if (irq < 0) { - dev_err(dev, "missing client IRQ resource\n"); - return -EINVAL; - } - err = devm_request_irq(dev, irq, rockchip_pcie_client_irq_handler, - IRQF_SHARED, "pcie-client", rockchip); - if (err) { - dev_err(dev, "failed to request PCIe client IRQ\n"); - return err; + rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v"); + if (IS_ERR(rockchip->vpcie12v)) { + if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER) + return -EPROBE_DEFER; + dev_info(dev, "no vpcie12v regulator found\n"); } rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3"); @@ -1047,11 +1163,19 @@ static int rockchip_pcie_set_vpcie(struct rockchip_pcie *rockchip) struct device *dev = rockchip->dev; int err; + if (!IS_ERR(rockchip->vpcie12v)) { + err = regulator_enable(rockchip->vpcie12v); + if (err) { + dev_err(dev, "fail to enable vpcie12v regulator\n"); + goto err_out; + } + } + if (!IS_ERR(rockchip->vpcie3v3)) { err = regulator_enable(rockchip->vpcie3v3); if (err) { dev_err(dev, "fail to enable vpcie3v3 regulator\n"); - goto err_out; + goto err_disable_12v; } } @@ -1079,6 +1203,9 @@ err_disable_1v8: err_disable_3v3: if (!IS_ERR(rockchip->vpcie3v3)) regulator_disable(rockchip->vpcie3v3); +err_disable_12v: + if (!IS_ERR(rockchip->vpcie12v)) + regulator_disable(rockchip->vpcie12v); err_out: return err; } @@ -1116,7 +1243,7 @@ static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip) return -EINVAL; } - rockchip->irq_domain = irq_domain_add_linear(intc, 4, + rockchip->irq_domain = irq_domain_add_linear(intc, PCI_NUM_INTX, &intx_domain_ops, rockchip); if (!rockchip->irq_domain) { dev_err(dev, "failed to get a INTx IRQ domain\n"); @@ -1270,6 +1397,56 @@ static int rockchip_pcie_wait_l2(struct rockchip_pcie *rockchip) return 0; } +static int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip) +{ + struct device *dev = rockchip->dev; + int err; + + err = clk_prepare_enable(rockchip->aclk_pcie); + if (err) { + dev_err(dev, "unable to enable aclk_pcie clock\n"); + return err; + } + + err = clk_prepare_enable(rockchip->aclk_perf_pcie); + if (err) { + dev_err(dev, "unable to enable aclk_perf_pcie clock\n"); + goto err_aclk_perf_pcie; + } + + err = clk_prepare_enable(rockchip->hclk_pcie); + if (err) { + dev_err(dev, "unable to enable hclk_pcie clock\n"); + goto err_hclk_pcie; + } + + err = clk_prepare_enable(rockchip->clk_pcie_pm); + if (err) { + dev_err(dev, "unable to enable clk_pcie_pm clock\n"); + goto err_clk_pcie_pm; + } + + return 0; + +err_clk_pcie_pm: + clk_disable_unprepare(rockchip->hclk_pcie); +err_hclk_pcie: + clk_disable_unprepare(rockchip->aclk_perf_pcie); +err_aclk_perf_pcie: + clk_disable_unprepare(rockchip->aclk_pcie); + return err; +} + +static void rockchip_pcie_disable_clocks(void *data) +{ 
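/* Aside (editor's note on rockchip_pcie_enable_clocks() just above): the
 * hand-written goto ladder is the usual kernel unwinding pattern, one error
 * label per successfully enabled clock, released in reverse order. As a
 * hedged alternative, the same logic can be expressed with the clk_bulk
 * helpers (some of which postdate this series); the sketch below is only an
 * illustration, not what the patch does, and it assumes the binding's four
 * clock names plus a clk_bulk_data array that the real rockchip_pcie
 * structure does not have.
 */

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/kernel.h>

static const char * const rockchip_pcie_clk_names[] = {
	"aclk", "aclk-perf", "hclk", "pm",	/* assumed DT clock-names */
};

/* Editor's sketch, not part of this patch. */
static int rockchip_pcie_enable_clocks_bulk(struct device *dev,
					    struct clk_bulk_data *clks)
{
	unsigned int i;
	int err;

	for (i = 0; i < ARRAY_SIZE(rockchip_pcie_clk_names); i++)
		clks[i].id = rockchip_pcie_clk_names[i];

	err = devm_clk_bulk_get(dev, ARRAY_SIZE(rockchip_pcie_clk_names),
				clks);
	if (err)
		return err;

	/* Prepares and enables every clock; unwinds itself on failure. */
	return clk_bulk_prepare_enable(ARRAY_SIZE(rockchip_pcie_clk_names),
				       clks);
}

/* The matching teardown, i.e. rockchip_pcie_disable_clocks() whose body
 * continues right below, would then shrink to a single
 * clk_bulk_disable_unprepare() call.
 */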
+ struct rockchip_pcie *rockchip = data; + + clk_disable_unprepare(rockchip->clk_pcie_pm); + clk_disable_unprepare(rockchip->hclk_pcie); + clk_disable_unprepare(rockchip->aclk_perf_pcie); + clk_disable_unprepare(rockchip->aclk_pcie); +} + static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev) { struct rockchip_pcie *rockchip = dev_get_drvdata(dev); @@ -1286,13 +1463,9 @@ static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev) return ret; } - phy_power_off(rockchip->phy); - phy_exit(rockchip->phy); + rockchip_pcie_deinit_phys(rockchip); - clk_disable_unprepare(rockchip->clk_pcie_pm); - clk_disable_unprepare(rockchip->hclk_pcie); - clk_disable_unprepare(rockchip->aclk_perf_pcie); - clk_disable_unprepare(rockchip->aclk_pcie); + rockchip_pcie_disable_clocks(rockchip); if (!IS_ERR(rockchip->vpcie0v9)) regulator_disable(rockchip->vpcie0v9); @@ -1313,21 +1486,9 @@ static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev) } } - err = clk_prepare_enable(rockchip->clk_pcie_pm); + err = rockchip_pcie_enable_clocks(rockchip); if (err) - goto err_pcie_pm; - - err = clk_prepare_enable(rockchip->hclk_pcie); - if (err) - goto err_hclk_pcie; - - err = clk_prepare_enable(rockchip->aclk_perf_pcie); - if (err) - goto err_aclk_perf_pcie; - - err = clk_prepare_enable(rockchip->aclk_pcie); - if (err) - goto err_aclk_pcie; + goto err_disable_0v9; err = rockchip_pcie_init_port(rockchip); if (err) @@ -1335,7 +1496,7 @@ static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev) err = rockchip_pcie_cfg_atu(rockchip); if (err) - goto err_pcie_resume; + goto err_err_deinit_port; /* Need this to enter L1 again */ rockchip_pcie_update_txcredit_mui(rockchip); @@ -1343,15 +1504,13 @@ static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev) return 0; +err_err_deinit_port: + rockchip_pcie_deinit_phys(rockchip); err_pcie_resume: - clk_disable_unprepare(rockchip->aclk_pcie); -err_aclk_pcie: - clk_disable_unprepare(rockchip->aclk_perf_pcie); -err_aclk_perf_pcie: - clk_disable_unprepare(rockchip->hclk_pcie); -err_hclk_pcie: - clk_disable_unprepare(rockchip->clk_pcie_pm); -err_pcie_pm: + rockchip_pcie_disable_clocks(rockchip); +err_disable_0v9: + if (!IS_ERR(rockchip->vpcie0v9)) + regulator_disable(rockchip->vpcie0v9); return err; } @@ -1385,29 +1544,9 @@ static int rockchip_pcie_probe(struct platform_device *pdev) if (err) return err; - err = clk_prepare_enable(rockchip->aclk_pcie); - if (err) { - dev_err(dev, "unable to enable aclk_pcie clock\n"); - goto err_aclk_pcie; - } - - err = clk_prepare_enable(rockchip->aclk_perf_pcie); - if (err) { - dev_err(dev, "unable to enable aclk_perf_pcie clock\n"); - goto err_aclk_perf_pcie; - } - - err = clk_prepare_enable(rockchip->hclk_pcie); - if (err) { - dev_err(dev, "unable to enable hclk_pcie clock\n"); - goto err_hclk_pcie; - } - - err = clk_prepare_enable(rockchip->clk_pcie_pm); - if (err) { - dev_err(dev, "unable to enable hclk_pcie clock\n"); - goto err_pcie_pm; - } + err = rockchip_pcie_enable_clocks(rockchip); + if (err) + return err; err = rockchip_pcie_set_vpcie(rockchip); if (err) { @@ -1423,12 +1562,12 @@ static int rockchip_pcie_probe(struct platform_device *pdev) err = rockchip_pcie_init_irq_domain(rockchip); if (err < 0) - goto err_vpcie; + goto err_deinit_port; err = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff, &res, &io_base); if (err) - goto err_vpcie; + goto err_remove_irq_domain; err = devm_request_pci_bus_resources(dev, &res); if (err) @@ -1466,12 +1605,12 @@ static int 
rockchip_pcie_probe(struct platform_device *pdev) err = rockchip_pcie_cfg_atu(rockchip); if (err) - goto err_free_res; + goto err_unmap_iospace; rockchip->msg_region = devm_ioremap(dev, rockchip->msg_bus_addr, SZ_1M); if (!rockchip->msg_region) { err = -ENOMEM; - goto err_free_res; + goto err_unmap_iospace; } list_splice_init(&res, &bridge->windows); @@ -1484,7 +1623,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev) err = pci_scan_root_bus_bridge(bridge); if (err < 0) - goto err_free_res; + goto err_unmap_iospace; bus = bridge->bus; @@ -1498,9 +1637,17 @@ static int rockchip_pcie_probe(struct platform_device *pdev) pci_bus_add_devices(bus); return 0; +err_unmap_iospace: + pci_unmap_iospace(rockchip->io); err_free_res: pci_free_resource_list(&res); +err_remove_irq_domain: + irq_domain_remove(rockchip->irq_domain); +err_deinit_port: + rockchip_pcie_deinit_phys(rockchip); err_vpcie: + if (!IS_ERR(rockchip->vpcie12v)) + regulator_disable(rockchip->vpcie12v); if (!IS_ERR(rockchip->vpcie3v3)) regulator_disable(rockchip->vpcie3v3); if (!IS_ERR(rockchip->vpcie1v8)) @@ -1508,14 +1655,7 @@ err_vpcie: if (!IS_ERR(rockchip->vpcie0v9)) regulator_disable(rockchip->vpcie0v9); err_set_vpcie: - clk_disable_unprepare(rockchip->clk_pcie_pm); -err_pcie_pm: - clk_disable_unprepare(rockchip->hclk_pcie); -err_hclk_pcie: - clk_disable_unprepare(rockchip->aclk_perf_pcie); -err_aclk_perf_pcie: - clk_disable_unprepare(rockchip->aclk_pcie); -err_aclk_pcie: + rockchip_pcie_disable_clocks(rockchip); return err; } @@ -1529,14 +1669,12 @@ static int rockchip_pcie_remove(struct platform_device *pdev) pci_unmap_iospace(rockchip->io); irq_domain_remove(rockchip->irq_domain); - phy_power_off(rockchip->phy); - phy_exit(rockchip->phy); + rockchip_pcie_deinit_phys(rockchip); - clk_disable_unprepare(rockchip->clk_pcie_pm); - clk_disable_unprepare(rockchip->hclk_pcie); - clk_disable_unprepare(rockchip->aclk_perf_pcie); - clk_disable_unprepare(rockchip->aclk_pcie); + rockchip_pcie_disable_clocks(rockchip); + if (!IS_ERR(rockchip->vpcie12v)) + regulator_disable(rockchip->vpcie12v); if (!IS_ERR(rockchip->vpcie3v3)) regulator_disable(rockchip->vpcie3v3); if (!IS_ERR(rockchip->vpcie1v8)) diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c index eec641a34fc5..65dea98b2643 100644 --- a/drivers/pci/host/pcie-xilinx-nwl.c +++ b/drivers/pci/host/pcie-xilinx-nwl.c @@ -133,7 +133,6 @@ #define CFG_DMA_REG_BAR GENMASK(2, 0) #define INT_PCI_MSI_NR (2 * 32) -#define INTX_NUM 4 /* Readin the PS_LINKUP */ #define PS_LINKUP_OFFSET 0x00000238 @@ -334,9 +333,8 @@ static void nwl_pcie_leg_handler(struct irq_desc *desc) while ((status = nwl_bridge_readl(pcie, MSGF_LEG_STATUS) & MSGF_LEG_SR_MASKALL) != 0) { - for_each_set_bit(bit, &status, INTX_NUM) { - virq = irq_find_mapping(pcie->legacy_irq_domain, - bit + 1); + for_each_set_bit(bit, &status, PCI_NUM_INTX) { + virq = irq_find_mapping(pcie->legacy_irq_domain, bit); if (virq) generic_handle_irq(virq); } @@ -436,6 +434,7 @@ static int nwl_legacy_map(struct irq_domain *domain, unsigned int irq, static const struct irq_domain_ops legacy_domain_ops = { .map = nwl_legacy_map, + .xlate = pci_irqd_intx_xlate, }; #ifdef CONFIG_PCI_MSI @@ -559,7 +558,7 @@ static int nwl_pcie_init_irq_domain(struct nwl_pcie *pcie) } pcie->legacy_irq_domain = irq_domain_add_linear(legacy_intc_node, - INTX_NUM, + PCI_NUM_INTX, &legacy_domain_ops, pcie); @@ -813,7 +812,7 @@ static int nwl_pcie_parse_dt(struct nwl_pcie *pcie, pcie->irq_intx = platform_get_irq_byname(pdev, "intx"); if 
(pcie->irq_intx < 0) { dev_err(dev, "failed to get intx IRQ %d\n", pcie->irq_intx); - return -EINVAL; + return pcie->irq_intx; } irq_set_chained_handler_and_data(pcie->irq_intx, diff --git a/drivers/pci/host/pcie-xilinx.c b/drivers/pci/host/pcie-xilinx.c index f63fa5e0278c..94e13cb8608f 100644 --- a/drivers/pci/host/pcie-xilinx.c +++ b/drivers/pci/host/pcie-xilinx.c @@ -5,7 +5,7 @@ * * Based on the Tegra PCIe driver * - * Bits taken from Synopsys Designware Host controller driver and + * Bits taken from Synopsys DesignWare Host controller driver and * ARM PCI Host generic driver. * * This program is free software: you can redistribute it and/or modify @@ -60,6 +60,7 @@ #define XILINX_PCIE_INTR_MST_SLVERR BIT(27) #define XILINX_PCIE_INTR_MST_ERRP BIT(28) #define XILINX_PCIE_IMR_ALL_MASK 0x1FF30FED +#define XILINX_PCIE_IMR_ENABLE_MASK 0x1FF30F0D #define XILINX_PCIE_IDR_ALL_MASK 0xFFFFFFFF /* Root Port Error FIFO Read Register definitions */ @@ -369,6 +370,7 @@ static int xilinx_pcie_intx_map(struct irq_domain *domain, unsigned int irq, /* INTx IRQ Domain operations */ static const struct irq_domain_ops intx_domain_ops = { .map = xilinx_pcie_intx_map, + .xlate = pci_irqd_intx_xlate, }; /* PCIe HW Functions */ @@ -384,7 +386,7 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data) { struct xilinx_pcie_port *port = (struct xilinx_pcie_port *)data; struct device *dev = port->dev; - u32 val, mask, status, msi_data; + u32 val, mask, status; /* Read interrupt decode and mask registers */ val = pcie_read(port, XILINX_PCIE_REG_IDR); @@ -424,8 +426,7 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data) xilinx_pcie_clear_err_interrupts(port); } - if (status & XILINX_PCIE_INTR_INTX) { - /* INTx interrupt received */ + if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) { val = pcie_read(port, XILINX_PCIE_REG_RPIFR1); /* Check whether interrupt valid */ @@ -434,41 +435,24 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data) goto error; } - if (!(val & XILINX_PCIE_RPIFR1_MSI_INTR)) { - /* Clear interrupt FIFO register 1 */ - pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, - XILINX_PCIE_REG_RPIFR1); - - /* Handle INTx Interrupt */ - val = ((val & XILINX_PCIE_RPIFR1_INTR_MASK) >> - XILINX_PCIE_RPIFR1_INTR_SHIFT) + 1; - generic_handle_irq(irq_find_mapping(port->leg_domain, - val)); - } - } - - if (status & XILINX_PCIE_INTR_MSI) { - /* MSI Interrupt */ - val = pcie_read(port, XILINX_PCIE_REG_RPIFR1); - - if (!(val & XILINX_PCIE_RPIFR1_INTR_VALID)) { - dev_warn(dev, "RP Intr FIFO1 read error\n"); - goto error; - } - + /* Decode the IRQ number */ if (val & XILINX_PCIE_RPIFR1_MSI_INTR) { - msi_data = pcie_read(port, XILINX_PCIE_REG_RPIFR2) & - XILINX_PCIE_RPIFR2_MSG_DATA; + val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) & + XILINX_PCIE_RPIFR2_MSG_DATA; + } else { + val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >> + XILINX_PCIE_RPIFR1_INTR_SHIFT; + val = irq_find_mapping(port->leg_domain, val); + } - /* Clear interrupt FIFO register 1 */ - pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, - XILINX_PCIE_REG_RPIFR1); + /* Clear interrupt FIFO register 1 */ + pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, + XILINX_PCIE_REG_RPIFR1); - if (IS_ENABLED(CONFIG_PCI_MSI)) { - /* Handle MSI Interrupt */ - generic_handle_irq(msi_data); - } - } + /* Handle the interrupt */ + if (IS_ENABLED(CONFIG_PCI_MSI) || + !(val & XILINX_PCIE_RPIFR1_MSI_INTR)) + generic_handle_irq(val); } if (status & XILINX_PCIE_INTR_SLV_UNSUPP) @@ -524,7 +508,7 @@ static int xilinx_pcie_init_irq_domain(struct 
xilinx_pcie_port *port) return -ENODEV; } - port->leg_domain = irq_domain_add_linear(pcie_intc_node, 4, + port->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, &intx_domain_ops, port); if (!port->leg_domain) { @@ -571,8 +555,8 @@ static void xilinx_pcie_init_port(struct xilinx_pcie_port *port) XILINX_PCIE_IMR_ALL_MASK, XILINX_PCIE_REG_IDR); - /* Enable all interrupts */ - pcie_write(port, XILINX_PCIE_IMR_ALL_MASK, XILINX_PCIE_REG_IMR); + /* Enable all interrupts we handle */ + pcie_write(port, XILINX_PCIE_IMR_ENABLE_MASK, XILINX_PCIE_REG_IMR); /* Enable the Bridge enable bit */ pcie_write(port, pcie_read(port, XILINX_PCIE_REG_RPSC) | diff --git a/drivers/pci/host/vmd.c b/drivers/pci/host/vmd.c index 6088c3083194..509893bc3e63 100644 --- a/drivers/pci/host/vmd.c +++ b/drivers/pci/host/vmd.c @@ -183,7 +183,7 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d int i, best = 1; unsigned long flags; - if (!desc->msi_attrib.is_msix || vmd->msix_count == 1) + if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1) return &vmd->irqs[0]; raw_spin_lock_irqsave(&list_lock, flags); @@ -697,7 +697,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id) return -ENODEV; vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count, - PCI_IRQ_MSIX | PCI_IRQ_AFFINITY); + PCI_IRQ_MSIX); if (vmd->msix_count < 0) return vmd->msix_count; @@ -755,6 +755,11 @@ static void vmd_remove(struct pci_dev *dev) static int vmd_suspend(struct device *dev) { struct pci_dev *pdev = to_pci_dev(dev); + struct vmd_dev *vmd = pci_get_drvdata(pdev); + int i; + + for (i = 0; i < vmd->msix_count; i++) + devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]); pci_save_state(pdev); return 0; @@ -763,6 +768,16 @@ static int vmd_suspend(struct device *dev) static int vmd_resume(struct device *dev) { struct pci_dev *pdev = to_pci_dev(dev); + struct vmd_dev *vmd = pci_get_drvdata(pdev); + int err, i; + + for (i = 0; i < vmd->msix_count; i++) { + err = devm_request_irq(dev, pci_irq_vector(pdev, i), + vmd_irq, IRQF_NO_THREAD, + "vmd", &vmd->irqs[i]); + if (err) + return err; + } pci_restore_state(pdev); return 0; diff --git a/drivers/pci/hotplug/cpcihp_zt5550.c b/drivers/pci/hotplug/cpcihp_zt5550.c index 5f49c3fd736a..2f8659a148f5 100644 --- a/drivers/pci/hotplug/cpcihp_zt5550.c +++ b/drivers/pci/hotplug/cpcihp_zt5550.c @@ -280,7 +280,7 @@ static void zt5550_hc_remove_one(struct pci_dev *pdev) } -static struct pci_device_id zt5550_hc_pci_tbl[] = { +static const struct pci_device_id zt5550_hc_pci_tbl[] = { { PCI_VENDOR_ID_ZIATECH, PCI_DEVICE_ID_ZIATECH_5550_HC, PCI_ANY_ID, PCI_ANY_ID, }, { 0, } }; diff --git a/drivers/pci/hotplug/cpqphp_core.c b/drivers/pci/hotplug/cpqphp_core.c index 33d300d12411..4d06b8461255 100644 --- a/drivers/pci/hotplug/cpqphp_core.c +++ b/drivers/pci/hotplug/cpqphp_core.c @@ -1417,7 +1417,7 @@ static void __exit unload_cpqphpd(void) iounmap(smbios_start); } -static struct pci_device_id hpcd_pci_tbl[] = { +static const struct pci_device_id hpcd_pci_tbl[] = { { /* handle any PCI Hotplug controller */ .class = ((PCI_CLASS_SYSTEM_PCI_HOTPLUG << 8) | 0x00), diff --git a/drivers/pci/hotplug/ibmphp_core.c b/drivers/pci/hotplug/ibmphp_core.c index 5efd01d84498..73cf84645c82 100644 --- a/drivers/pci/hotplug/ibmphp_core.c +++ b/drivers/pci/hotplug/ibmphp_core.c @@ -852,7 +852,7 @@ static int set_bus(struct slot *slot_cur) u8 speed; u8 cmd = 0x0; int retval; - static struct pci_device_id ciobx[] = { + static const struct pci_device_id 
ciobx[] = { { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS, 0x0101) }, { }, }; diff --git a/drivers/pci/hotplug/ibmphp_ebda.c b/drivers/pci/hotplug/ibmphp_ebda.c index 43e345ac296b..a6a4dac798e5 100644 --- a/drivers/pci/hotplug/ibmphp_ebda.c +++ b/drivers/pci/hotplug/ibmphp_ebda.c @@ -1153,7 +1153,7 @@ void ibmphp_free_ebda_pci_rsrc_queue(void) } } -static struct pci_device_id id_table[] = { +static const struct pci_device_id id_table[] = { { .vendor = PCI_VENDOR_ID_IBM, .device = HPC_DEVICE_ID, diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index 026830a138ae..e5d5ce9e3010 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c @@ -586,6 +586,14 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id) events = status & (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC); + + /* + * If we've already reported a power fault, don't report it again + * until we've done something to handle it. + */ + if (ctrl->power_fault_detected) + events &= ~PCI_EXP_SLTSTA_PFD; + if (!events) return IRQ_NONE; diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c index 7c203198b582..74f6a17e4614 100644 --- a/drivers/pci/hotplug/pnv_php.c +++ b/drivers/pci/hotplug/pnv_php.c @@ -163,8 +163,8 @@ static void pnv_php_detach_device_nodes(struct device_node *parent) of_node_put(dn); refcount = kref_read(&dn->kobj.kref); if (refcount != 1) - pr_warn("Invalid refcount %d on <%s>\n", - refcount, of_node_full_name(dn)); + pr_warn("Invalid refcount %d on <%pOF>\n", + refcount, dn); of_detach_node(dn); } diff --git a/drivers/pci/hotplug/rpadlpar_core.c b/drivers/pci/hotplug/rpadlpar_core.c index 3f93a4e79595..a3449d717a99 100644 --- a/drivers/pci/hotplug/rpadlpar_core.c +++ b/drivers/pci/hotplug/rpadlpar_core.c @@ -150,8 +150,8 @@ static void dlpar_pci_add_bus(struct device_node *dn) /* Add EADS device to PHB bus, adding new entry to bus->devices */ dev = of_create_pci_dev(dn, phb->bus, pdn->devfn); if (!dev) { - printk(KERN_ERR "%s: failed to create pci dev for %s\n", - __func__, dn->full_name); + printk(KERN_ERR "%s: failed to create pci dev for %pOF\n", + __func__, dn); return; } diff --git a/drivers/pci/hotplug/rpadlpar_sysfs.c b/drivers/pci/hotplug/rpadlpar_sysfs.c index a796301ea03f..edb5d8a53020 100644 --- a/drivers/pci/hotplug/rpadlpar_sysfs.c +++ b/drivers/pci/hotplug/rpadlpar_sysfs.c @@ -102,7 +102,7 @@ static struct attribute *default_attrs[] = { NULL, }; -static struct attribute_group dlpar_attr_group = { +static const struct attribute_group dlpar_attr_group = { .attrs = default_attrs, }; diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c index 8d132024f06e..1e29abaaea08 100644 --- a/drivers/pci/hotplug/rpaphp_core.c +++ b/drivers/pci/hotplug/rpaphp_core.c @@ -318,7 +318,7 @@ int rpaphp_add_slot(struct device_node *dn) if (!is_php_dn(dn, &indexes, &names, &types, &power_domains)) return 0; - dbg("Entry %s: dn->full_name=%s\n", __func__, dn->full_name); + dbg("Entry %s: dn=%pOF\n", __func__, dn); /* register PCI devices */ name = (char *) &names[1]; diff --git a/drivers/pci/hotplug/rpaphp_pci.c b/drivers/pci/hotplug/rpaphp_pci.c index ea41ea1d3c00..32aabc533be8 100644 --- a/drivers/pci/hotplug/rpaphp_pci.c +++ b/drivers/pci/hotplug/rpaphp_pci.c @@ -95,7 +95,7 @@ int rpaphp_enable_slot(struct slot *slot) bus = pci_find_bus_by_node(slot->dn); if (!bus) { - err("%s: no pci_bus for dn %s\n", __func__, slot->dn->full_name); + err("%s: no pci_bus for dn 
%pOF\n", __func__, slot->dn); return -EINVAL; } @@ -125,7 +125,7 @@ int rpaphp_enable_slot(struct slot *slot) if (rpaphp_debug) { struct pci_dev *dev; - dbg("%s: pci_devs of slot[%s]\n", __func__, slot->dn->full_name); + dbg("%s: pci_devs of slot[%pOF]\n", __func__, slot->dn); list_for_each_entry(dev, &bus->devices, bus_list) dbg("\t%s\n", pci_name(dev)); } diff --git a/drivers/pci/hotplug/rpaphp_slot.c b/drivers/pci/hotplug/rpaphp_slot.c index 388c4d8fcdd1..489862360f2c 100644 --- a/drivers/pci/hotplug/rpaphp_slot.c +++ b/drivers/pci/hotplug/rpaphp_slot.c @@ -122,8 +122,8 @@ int rpaphp_register_slot(struct slot *slot) int retval; int slotno = -1; - dbg("%s registering slot:path[%s] index[%x], name[%s] pdomain[%x] type[%d]\n", - __func__, slot->dn->full_name, slot->index, slot->name, + dbg("%s registering slot:path[%pOF] index[%x], name[%s] pdomain[%x] type[%d]\n", + __func__, slot->dn, slot->index, slot->name, slot->power_domain, slot->type); /* should not try to register the same slot twice */ diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp_core.c index 3454dc7385f1..7bfb87bd2b7e 100644 --- a/drivers/pci/hotplug/shpchp_core.c +++ b/drivers/pci/hotplug/shpchp_core.c @@ -351,7 +351,7 @@ static void shpc_remove(struct pci_dev *dev) kfree(ctrl); } -static struct pci_device_id shpcd_pci_tbl[] = { +static const struct pci_device_id shpcd_pci_tbl[] = { {PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0)}, { /* end: all zeroes */ } }; diff --git a/drivers/pci/hotplug/shpchp_hpc.c b/drivers/pci/hotplug/shpchp_hpc.c index de0ea474fb73..e5824c7b7b6b 100644 --- a/drivers/pci/hotplug/shpchp_hpc.c +++ b/drivers/pci/hotplug/shpchp_hpc.c @@ -1062,6 +1062,8 @@ int shpc_init(struct controller *ctrl, struct pci_dev *pdev) if (rc) { ctrl_info(ctrl, "Can't get msi for the hotplug controller\n"); ctrl_info(ctrl, "Use INTx for the hotplug controller\n"); + } else { + pci_set_master(pdev); } rc = request_irq(ctrl->pci_dev->irq, shpc_isr, IRQF_SHARED, diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index 120485d6f352..ac41c8be9200 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c @@ -331,7 +331,6 @@ failed: while (i--) pci_iov_remove_virtfn(dev, i, 0); - pcibios_sriov_disable(dev); err_pcibios: iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE); pci_cfg_access_lock(dev); @@ -339,6 +338,8 @@ err_pcibios: ssleep(1); pci_cfg_access_unlock(dev); + pcibios_sriov_disable(dev); + if (iov->link != dev->devfn) sysfs_remove_link(&dev->dev.kobj, "dep_link"); @@ -357,14 +358,14 @@ static void sriov_disable(struct pci_dev *dev) for (i = 0; i < iov->num_VFs; i++) pci_iov_remove_virtfn(dev, i, 0); - pcibios_sriov_disable(dev); - iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE); pci_cfg_access_lock(dev); pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl); ssleep(1); pci_cfg_access_unlock(dev); + pcibios_sriov_disable(dev); + if (iov->link != dev->devfn) sysfs_remove_link(&dev->dev.kobj, "dep_link"); diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 2225afc1cbbb..496ed9130600 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -1451,13 +1451,30 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, } EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain); +/* + * Users of the generic MSI infrastructure expect a device to have a single ID, + * so with DMA aliases we have to pick the least-worst compromise. 
Devices with + * DMA phantom functions tend to still emit MSIs from the real function number, + * so we ignore those and only consider topological aliases where either the + * alias device or RID appears on a different bus number. We also make the + * reasonable assumption that bridges are walked in an upstream direction (so + * the last one seen wins), and the much braver assumption that the most likely + * case is that of PCI->PCIe so we should always use the alias RID. This echoes + * the logic from intel_irq_remapping's set_msi_sid(), which presumably works + * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions + * for taking ownership all we can really do is close our eyes and hope... + */ static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) { u32 *pa = data; + u8 bus = PCI_BUS_NUM(*pa); + + if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus) + *pa = alias; - *pa = alias; return 0; } + /** * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) * @domain: The interrupt domain @@ -1471,7 +1488,7 @@ static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) { struct device_node *of_node; - u32 rid = 0; + u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn); pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); @@ -1487,14 +1504,14 @@ u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) * @pdev: The PCI device * * Use the firmware data to find a device-specific MSI domain - * (i.e. not one that is ste as a default). + * (i.e. not one that is set as a default). * - * Returns: The coresponding MSI domain or NULL if none has been found. + * Returns: The corresponding MSI domain or NULL if none has been found. 
*/ struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) { struct irq_domain *dom; - u32 rid = 0; + u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn); pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); dom = of_msi_map_get_device_domain(&pdev->dev, rid); diff --git a/drivers/pci/pci-label.c b/drivers/pci/pci-label.c index a7a41d9c29df..7e9e79575d93 100644 --- a/drivers/pci/pci-label.c +++ b/drivers/pci/pci-label.c @@ -123,7 +123,7 @@ static struct attribute *smbios_attributes[] = { NULL, }; -static struct attribute_group smbios_attr_group = { +static const struct attribute_group smbios_attr_group = { .attrs = smbios_attributes, .is_visible = smbios_instance_string_exist, }; @@ -260,7 +260,7 @@ static struct attribute *acpi_attributes[] = { NULL, }; -static struct attribute_group acpi_attr_group = { +static const struct attribute_group acpi_attr_group = { .attrs = acpi_attributes, .is_visible = acpi_index_string_exist, }; diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 2f3780b50723..1eecfa301f7f 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -556,9 +556,9 @@ static ssize_t devspec_show(struct device *dev, struct pci_dev *pdev = to_pci_dev(dev); struct device_node *np = pci_device_to_OF_node(pdev); - if (np == NULL || np->full_name == NULL) + if (np == NULL) return 0; - return sprintf(buf, "%s", np->full_name); + return sprintf(buf, "%pOF", np); } static DEVICE_ATTR_RO(devspec); #endif @@ -1211,11 +1211,8 @@ static ssize_t pci_resource_io(struct file *filp, struct kobject *kobj, { struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); int bar = (unsigned long)attr->private; - struct resource *res; unsigned long port = off; - res = &pdev->resource[bar]; - port += pci_resource_start(pdev, bar); if (port > pci_resource_end(pdev, bar)) @@ -1431,7 +1428,7 @@ static ssize_t pci_read_rom(struct file *filp, struct kobject *kobj, return count; } -static struct bin_attribute pci_config_attr = { +static const struct bin_attribute pci_config_attr = { .attr = { .name = "config", .mode = S_IRUGO | S_IWUSR, @@ -1441,7 +1438,7 @@ static struct bin_attribute pci_config_attr = { .write = pci_write_config, }; -static struct bin_attribute pcie_config_attr = { +static const struct bin_attribute pcie_config_attr = { .attr = { .name = "config", .mode = S_IRUGO | S_IWUSR, @@ -1735,7 +1732,7 @@ const struct attribute_group *pcie_dev_groups[] = { NULL, }; -static struct attribute_group pci_dev_hp_attr_group = { +static const struct attribute_group pci_dev_hp_attr_group = { .attrs = pci_dev_hp_attrs, .is_visible = pci_dev_hp_attrs_are_visible, }; @@ -1759,23 +1756,23 @@ static umode_t sriov_attrs_are_visible(struct kobject *kobj, return a->mode; } -static struct attribute_group sriov_dev_attr_group = { +static const struct attribute_group sriov_dev_attr_group = { .attrs = sriov_dev_attrs, .is_visible = sriov_attrs_are_visible, }; #endif /* CONFIG_PCI_IOV */ -static struct attribute_group pci_dev_attr_group = { +static const struct attribute_group pci_dev_attr_group = { .attrs = pci_dev_dev_attrs, .is_visible = pci_dev_attrs_are_visible, }; -static struct attribute_group pci_bridge_attr_group = { +static const struct attribute_group pci_bridge_attr_group = { .attrs = pci_bridge_attrs, .is_visible = pci_bridge_attrs_are_visible, }; -static struct attribute_group pcie_dev_attr_group = { +static const struct attribute_group pcie_dev_attr_group = { .attrs = pcie_dev_attrs, .is_visible = pcie_dev_attrs_are_visible, }; diff --git a/drivers/pci/pci.c 
index 68e3b2b0da93..b0002daa50f3 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -52,6 +52,7 @@ static void pci_pme_list_scan(struct work_struct *work);
 static LIST_HEAD(pci_pme_list);
 static DEFINE_MUTEX(pci_pme_list_mutex);
 static DECLARE_DELAYED_WORK(pci_pme_work, pci_pme_list_scan);
+static DEFINE_MUTEX(pci_bridge_mutex);
 
 struct pci_pme_device {
 	struct list_head list;
@@ -892,7 +893,9 @@ EXPORT_SYMBOL_GPL(__pci_complete_power_transition);
 * -EINVAL if the requested state is invalid.
 * -EIO if device does not support PCI PM or its PM capabilities register has a
 * wrong version, or device doesn't support the requested state.
+ * 0 if the transition is to D1 or D2 but D1 and D2 are not supported.
 * 0 if device already is in the requested state.
+ * 0 if the transition is to D3 but D3 is not supported.
 * 0 if device's power state has been successfully changed.
 */
 int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
@@ -1348,10 +1351,16 @@ static void pci_enable_bridge(struct pci_dev *dev)
 	if (bridge)
 		pci_enable_bridge(bridge);
 
+	/*
+	 * Hold pci_bridge_mutex to prevent a race when enabling two
+	 * devices below the bridge simultaneously. The race may cause a
+	 * PCI_COMMAND_MEMORY update to be lost (see changelog).
+	 */
+	mutex_lock(&pci_bridge_mutex);
 	if (pci_is_enabled(dev)) {
 		if (!dev->is_busmaster)
 			pci_set_master(dev);
-		return;
+		goto end;
 	}
 
 	retval = pci_enable_device(dev);
@@ -1359,6 +1368,8 @@ static void pci_enable_bridge(struct pci_dev *dev)
 		dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n",
 			retval);
 	pci_set_master(dev);
+end:
+	mutex_unlock(&pci_bridge_mutex);
 }
 
 static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
@@ -1383,7 +1394,7 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
 		return 0;		/* already enabled */
 
 	bridge = pci_upstream_bridge(dev);
-	if (bridge)
+	if (bridge && !pci_is_enabled(bridge))
 		pci_enable_bridge(bridge);
 
 	/* only skip sriov related */
@@ -3818,27 +3829,49 @@ int pci_wait_for_pending_transaction(struct pci_dev *dev)
 }
 EXPORT_SYMBOL(pci_wait_for_pending_transaction);
 
-/*
- * We should only need to wait 100ms after FLR, but some devices take longer.
- * Wait for up to 1000ms for config space to return something other than -1.
- * Intel IGD requires this when an LCD panel is attached. We read the 2nd
- * dword because VFs don't implement the 1st dword.
- */
 static void pci_flr_wait(struct pci_dev *dev)
 {
-	int i = 0;
+	int delay = 1, timeout = 60000;
 	u32 id;
 
-	do {
-		msleep(100);
+	/*
+	 * Per PCIe r3.1, sec 6.6.2, a device must complete an FLR within
+	 * 100ms, but may silently discard requests while the FLR is in
+	 * progress. Wait 100ms before trying to access the device.
+	 */
+	msleep(100);
+
+	/*
+	 * After 100ms, the device should not silently discard config
+	 * requests, but it may still indicate that it needs more time by
+	 * responding to them with CRS completions. The Root Port will
+	 * generally synthesize ~0 data to complete the read (except when
+	 * CRS SV is enabled and the read was for the Vendor ID; in that
+	 * case it synthesizes 0x0001 data).
+	 *
+	 * Wait for the device to return a non-CRS completion. Read the
+	 * Command register instead of Vendor ID so we don't have to
+	 * contend with the CRS SV value.
+	 */
+	pci_read_config_dword(dev, PCI_COMMAND, &id);
+	while (id == ~0) {
+		if (delay > timeout) {
+			dev_warn(&dev->dev, "not ready %dms after FLR; giving up\n",
+				 100 + delay - 1);
+			return;
+		}
+
+		if (delay > 1000)
+			dev_info(&dev->dev, "not ready %dms after FLR; waiting\n",
+				 100 + delay - 1);
+
+		msleep(delay);
+		delay *= 2;
 		pci_read_config_dword(dev, PCI_COMMAND, &id);
-	} while (i++ < 10 && id == ~0);
+	}
 
-	if (id == ~0)
-		dev_warn(&dev->dev, "Failed to return from FLR\n");
-	else if (i > 1)
-		dev_info(&dev->dev, "Required additional %dms to return from FLR\n",
-			 (i - 1) * 100);
+	if (delay > 1000)
+		dev_info(&dev->dev, "ready %dms after FLR\n", 100 + delay - 1);
 }
 
 /**
@@ -5405,8 +5438,8 @@ static int of_pci_bus_find_domain_nr(struct device *parent)
 			use_dt_domains = 0;
 			domain = pci_get_new_domain_nr();
 		} else {
-			dev_err(parent, "Node %s has inconsistent \"linux,pci-domain\" property in DT\n",
-				parent->of_node->full_name);
+			dev_err(parent, "Node %pOF has inconsistent \"linux,pci-domain\" property in DT\n",
+				parent->of_node);
 			domain = -1;
 		}
 
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 22e061738c6f..a6560c9baa52 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -235,6 +235,7 @@ enum pci_bar_type {
 	pci_bar_mem64,		/* A 64-bit memory BAR */
 };
 
+int pci_configure_extended_tags(struct pci_dev *dev, void *ign);
 bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
 				int crs_timeout);
 int pci_setup_device(struct pci_dev *dev);
diff --git a/drivers/pci/pcie/aer/aerdrv.c b/drivers/pci/pcie/aer/aerdrv.c
index dea186a9d6b6..6ff5f5b4f5e6 100644
--- a/drivers/pci/pcie/aer/aerdrv.c
+++ b/drivers/pci/pcie/aer/aerdrv.c
@@ -32,16 +32,9 @@ static int aer_probe(struct pcie_device *dev);
 static void aer_remove(struct pcie_device *dev);
-static pci_ers_result_t aer_error_detected(struct pci_dev *dev,
-	enum pci_channel_state error);
 static void aer_error_resume(struct pci_dev *dev);
 static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
 
-static const struct pci_error_handlers aer_error_handlers = {
-	.error_detected = aer_error_detected,
-	.resume = aer_error_resume,
-};
-
 static struct pcie_port_service_driver aerdriver = {
 	.name		= "aer",
 	.port_type	= PCI_EXP_TYPE_ROOT_PORT,
@@ -49,9 +42,7 @@ static struct pcie_port_service_driver aerdriver = {
 
 	.probe		= aer_probe,
 	.remove		= aer_remove,
-
-	.err_handler	= &aer_error_handlers,
-
+	.error_resume	= aer_error_resume,
 	.reset_link	= aer_root_reset,
 };
 
@@ -350,20 +341,6 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
 }
 
 /**
- * aer_error_detected - update severity status
- * @dev: pointer to Root Port's pci_dev data structure
- * @error: error severity being notified by port bus
- *
- * Invoked by Port Bus driver during error recovery.
- */
-static pci_ers_result_t aer_error_detected(struct pci_dev *dev,
-	enum pci_channel_state error)
-{
-	/* Root Port has no impact. Always recovers. */
-	return PCI_ERS_RESULT_CAN_RECOVER;
-}
-
-/**
 * aer_error_resume - clean up corresponding error status bits
 * @dev: pointer to Root Port's pci_dev data structure
 *
diff --git a/drivers/pci/pcie/aer/aerdrv_core.c b/drivers/pci/pcie/aer/aerdrv_core.c
index b1303b32053f..890efcc574cb 100644
--- a/drivers/pci/pcie/aer/aerdrv_core.c
+++ b/drivers/pci/pcie/aer/aerdrv_core.c
@@ -5,10 +5,10 @@
 * License. See the file "COPYING" in the main directory of this archive
 * for more details.
 *
- * This file implements the core part of PCI-Express AER. When an pci-express
+ * This file implements the core part of PCIe AER. When a PCIe
 * error is delivered, an error message will be collected and printed to
 * console, then, an error recovery procedure will be executed by following
- * the pci error recovery rules.
+ * the PCI error recovery rules.
 *
 * Copyright (C) 2006 Intel Corp.
 *	Tom Long Nguyen (tom.l.nguyen@intel.com)
diff --git a/drivers/pci/pcie/pcie-dpc.c b/drivers/pci/pcie/pcie-dpc.c
index c39f32e42b4d..2d976a623ddc 100644
--- a/drivers/pci/pcie/pcie-dpc.c
+++ b/drivers/pci/pcie/pcie-dpc.c
@@ -16,17 +16,62 @@
 #include <linux/pcieport_if.h>
 #include "../pci.h"
 
+struct rp_pio_header_log_regs {
+	u32 dw0;
+	u32 dw1;
+	u32 dw2;
+	u32 dw3;
+};
+
+struct dpc_rp_pio_regs {
+	u32 status;
+	u32 mask;
+	u32 severity;
+	u32 syserror;
+	u32 exception;
+
+	struct rp_pio_header_log_regs header_log;
+	u32 impspec_log;
+	u32 tlp_prefix_log[4];
+	u32 log_size;
+	u16 first_error;
+};
+
 struct dpc_dev {
 	struct pcie_device	*dev;
 	struct work_struct	work;
 	int			cap_pos;
 	bool			rp;
+	u32			rp_pio_status;
+};
+
+static const char * const rp_pio_error_string[] = {
+	"Configuration Request received UR Completion",	/* Bit Position 0 */
+	"Configuration Request received CA Completion",	/* Bit Position 1 */
+	"Configuration Request Completion Timeout",	/* Bit Position 2 */
+	NULL,
+	NULL,
+	NULL,
+	NULL,
+	NULL,
+	"I/O Request received UR Completion",		/* Bit Position 8 */
+	"I/O Request received CA Completion",		/* Bit Position 9 */
+	"I/O Request Completion Timeout",		/* Bit Position 10 */
+	NULL,
+	NULL,
+	NULL,
+	NULL,
+	NULL,
+	"Memory Request received UR Completion",	/* Bit Position 16 */
+	"Memory Request received CA Completion",	/* Bit Position 17 */
+	"Memory Request Completion Timeout",		/* Bit Position 18 */
 };
 
 static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
 {
 	unsigned long timeout = jiffies + HZ;
 	struct pci_dev *pdev = dpc->dev->port;
+	struct device *dev = &dpc->dev->device;
 	u16 status;
 
 	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status);
@@ -36,15 +81,17 @@ static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
 		pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status);
 	}
 	if (status & PCI_EXP_DPC_RP_BUSY) {
-		dev_warn(&pdev->dev, "DPC root port still busy\n");
+		dev_warn(dev, "DPC root port still busy\n");
 		return -EBUSY;
 	}
 	return 0;
 }
 
-static void dpc_wait_link_inactive(struct pci_dev *pdev)
+static void dpc_wait_link_inactive(struct dpc_dev *dpc)
 {
 	unsigned long timeout = jiffies + HZ;
+	struct pci_dev *pdev = dpc->dev->port;
+	struct device *dev = &dpc->dev->device;
 	u16 lnk_status;
 
 	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
@@ -54,7 +101,7 @@ static void dpc_wait_link_inactive(struct pci_dev *pdev)
 		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
 	}
 	if (lnk_status & PCI_EXP_LNKSTA_DLLLA)
-		dev_warn(&pdev->dev, "Link state not disabled for DPC event\n");
+		dev_warn(dev, "Link state not disabled for DPC event\n");
 }
 
 static void interrupt_event_handler(struct work_struct *work)
@@ -76,17 +123,132 @@ static void interrupt_event_handler(struct work_struct *work)
 	}
 	pci_unlock_rescan_remove();
 
-	dpc_wait_link_inactive(pdev);
+	dpc_wait_link_inactive(dpc);
 	if (dpc->rp && dpc_wait_rp_inactive(dpc))
 		return;
+	if (dpc->rp && dpc->rp_pio_status) {
+		pci_write_config_dword(pdev,
+				dpc->cap_pos + PCI_EXP_DPC_RP_PIO_STATUS,
+				dpc->rp_pio_status);
+		dpc->rp_pio_status = 0;
+	}
+
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS,
 		PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT);
 }
 
+static void dpc_rp_pio_print_tlp_header(struct device *dev,
+					struct rp_pio_header_log_regs *t)
+{
+	dev_err(dev, "TLP Header: %#010x %#010x %#010x %#010x\n",
+		t->dw0, t->dw1, t->dw2, t->dw3);
+}
+
+static void dpc_rp_pio_print_error(struct dpc_dev *dpc,
+				   struct dpc_rp_pio_regs *rp_pio)
+{
+	struct device *dev = &dpc->dev->device;
+	int i;
+	u32 status;
+
+	dev_err(dev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n",
+		rp_pio->status, rp_pio->mask);
+
+	dev_err(dev, "RP PIO severity=%#010x, syserror=%#010x, exception=%#010x\n",
+		rp_pio->severity, rp_pio->syserror, rp_pio->exception);
+
+	status = (rp_pio->status & ~rp_pio->mask);
+
+	for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) {
+		if (!(status & (1 << i)))
+			continue;
+
+		dev_err(dev, "[%2d] %s%s\n", i, rp_pio_error_string[i],
+			rp_pio->first_error == i ? " (First)" : "");
+	}
+
+	dpc_rp_pio_print_tlp_header(dev, &rp_pio->header_log);
+	if (rp_pio->log_size == 4)
+		return;
+	dev_err(dev, "RP PIO ImpSpec Log %#010x\n", rp_pio->impspec_log);
+
+	for (i = 0; i < rp_pio->log_size - 5; i++)
+		dev_err(dev, "TLP Prefix Header: dw%d, %#010x\n", i,
+			rp_pio->tlp_prefix_log[i]);
+}
+
+static void dpc_rp_pio_get_info(struct dpc_dev *dpc,
+				struct dpc_rp_pio_regs *rp_pio)
+{
+	struct pci_dev *pdev = dpc->dev->port;
+	struct device *dev = &dpc->dev->device;
+	int i;
+	u16 cap;
+	u16 status;
+
+	pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_STATUS,
+			      &rp_pio->status);
+	pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_MASK,
+			      &rp_pio->mask);
+
+	pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_SEVERITY,
+			      &rp_pio->severity);
+	pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_SYSERROR,
+			      &rp_pio->syserror);
+	pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_EXCEPTION,
+			      &rp_pio->exception);
+
+	/* Get First Error Pointer */
+	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status);
+	rp_pio->first_error = (status & 0x1f00) >> 8;
+
+	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap);
+	rp_pio->log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
+	if (rp_pio->log_size < 4 || rp_pio->log_size > 9) {
+		dev_err(dev, "RP PIO log size %u is invalid\n",
+			rp_pio->log_size);
+		return;
+	}
+
+	pci_read_config_dword(pdev,
+			dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG,
+			&rp_pio->header_log.dw0);
+	pci_read_config_dword(pdev,
+			dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 4,
+			&rp_pio->header_log.dw1);
+	pci_read_config_dword(pdev,
+			dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 8,
+			&rp_pio->header_log.dw2);
+	pci_read_config_dword(pdev,
+			dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 12,
+			&rp_pio->header_log.dw3);
+	if (rp_pio->log_size == 4)
+		return;
+
+	pci_read_config_dword(pdev,
+			dpc->cap_pos + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG,
+			&rp_pio->impspec_log);
+	for (i = 0; i < rp_pio->log_size - 5; i++)
+		pci_read_config_dword(pdev,
+			dpc->cap_pos + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG,
+			&rp_pio->tlp_prefix_log[i]);
+}
+
+static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+{
+	struct dpc_rp_pio_regs rp_pio_regs;
+
+	dpc_rp_pio_get_info(dpc, &rp_pio_regs);
+	dpc_rp_pio_print_error(dpc, &rp_pio_regs);
+
+	dpc->rp_pio_status = rp_pio_regs.status;
+}
+
 static irqreturn_t dpc_irq(int irq, void *context)
 {
 	struct dpc_dev *dpc = (struct dpc_dev *)context;
 	struct pci_dev *pdev = dpc->dev->port;
+	struct device *dev = &dpc->dev->device;
 	u16 status, source;
 
 	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status);
@@ -95,20 +257,24 @@ static irqreturn_t dpc_irq(int irq, void *context)
 	if (!status || status == (u16)(~0))
 		return IRQ_NONE;
 
-	dev_info(&dpc->dev->device, "DPC containment event, status:%#06x source:%#06x\n",
+	dev_info(dev, "DPC containment event, status:%#06x source:%#06x\n",
 		status, source);
 
 	if (status & PCI_EXP_DPC_STATUS_TRIGGER) {
 		u16 reason = (status >> 1) & 0x3;
 		u16 ext_reason = (status >> 5) & 0x3;
 
-		dev_warn(&dpc->dev->device, "DPC %s detected, remove downstream devices\n",
+		dev_warn(dev, "DPC %s detected, remove downstream devices\n",
 			 (reason == 0) ? "unmasked uncorrectable error" :
 			 (reason == 1) ? "ERR_NONFATAL" :
 			 (reason == 2) ? "ERR_FATAL" :
 			 (ext_reason == 0) ? "RP PIO error" :
 			 (ext_reason == 1) ? "software trigger" :
 					     "reserved error");
+		/* show RP PIO error detail information */
+		if (reason == 3 && ext_reason == 0)
+			dpc_process_rp_pio_error(dpc);
+
 		schedule_work(&dpc->work);
 	}
 	return IRQ_HANDLED;
@@ -119,10 +285,11 @@ static int dpc_probe(struct pcie_device *dev)
 {
 	struct dpc_dev *dpc;
 	struct pci_dev *pdev = dev->port;
+	struct device *device = &dev->device;
 	int status;
 	u16 ctl, cap;
 
-	dpc = devm_kzalloc(&dev->device, sizeof(*dpc), GFP_KERNEL);
+	dpc = devm_kzalloc(device, sizeof(*dpc), GFP_KERNEL);
 	if (!dpc)
 		return -ENOMEM;
 
@@ -131,10 +298,10 @@ static int dpc_probe(struct pcie_device *dev)
 	INIT_WORK(&dpc->work, interrupt_event_handler);
 	set_service_data(dev, dpc);
 
-	status = devm_request_irq(&dev->device, dev->irq, dpc_irq, IRQF_SHARED,
+	status = devm_request_irq(device, dev->irq, dpc_irq, IRQF_SHARED,
 				  "pcie-dpc", dpc);
 	if (status) {
-		dev_warn(&dev->device, "request IRQ%d failed: %d\n", dev->irq,
+		dev_warn(device, "request IRQ%d failed: %d\n", dev->irq,
 			 status);
 		return status;
 	}
@@ -147,7 +314,7 @@ static int dpc_probe(struct pcie_device *dev)
 	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN;
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
 
-	dev_info(&dev->device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
+	dev_info(device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
 		cap & 0xf, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
 		FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
 		FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), (cap >> 8) & 0xf,
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index 8aa3f14bc87d..be635f017756 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -21,7 +21,6 @@
 #include "../pci.h"
 #include "portdrv.h"
-#include "aer/aerdrv.h"
 
 /* If this switch is set, PCIe port native services should not be enabled.
  */
 bool pcie_ports_disabled;
@@ -177,108 +176,20 @@ static void pcie_portdrv_remove(struct pci_dev *dev)
 	pcie_port_device_remove(dev);
 }
 
-static int error_detected_iter(struct device *device, void *data)
-{
-	struct pcie_device *pcie_device;
-	struct pcie_port_service_driver *driver;
-	struct aer_broadcast_data *result_data;
-	pci_ers_result_t status;
-
-	result_data = (struct aer_broadcast_data *) data;
-
-	if (device->bus == &pcie_port_bus_type && device->driver) {
-		driver = to_service_driver(device->driver);
-		if (!driver ||
-		    !driver->err_handler ||
-		    !driver->err_handler->error_detected)
-			return 0;
-
-		pcie_device = to_pcie_device(device);
-
-		/* Forward error detected message to service drivers */
-		status = driver->err_handler->error_detected(
-			pcie_device->port,
-			result_data->state);
-		result_data->result =
-			merge_result(result_data->result, status);
-	}
-
-	return 0;
-}
-
 static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
 					enum pci_channel_state error)
 {
-	struct aer_broadcast_data data = {error, PCI_ERS_RESULT_CAN_RECOVER};
-
-	/* get true return value from &data */
-	device_for_each_child(&dev->dev, &data, error_detected_iter);
-	return data.result;
-}
-
-static int mmio_enabled_iter(struct device *device, void *data)
-{
-	struct pcie_device *pcie_device;
-	struct pcie_port_service_driver *driver;
-	pci_ers_result_t status, *result;
-
-	result = (pci_ers_result_t *) data;
-
-	if (device->bus == &pcie_port_bus_type && device->driver) {
-		driver = to_service_driver(device->driver);
-		if (driver &&
-		    driver->err_handler &&
-		    driver->err_handler->mmio_enabled) {
-			pcie_device = to_pcie_device(device);
-
-			/* Forward error message to service drivers */
-			status = driver->err_handler->mmio_enabled(
-				pcie_device->port);
-			*result = merge_result(*result, status);
-		}
-	}
-
-	return 0;
+	/* Root Port has no impact. Always recovers. */
+	return PCI_ERS_RESULT_CAN_RECOVER;
 }
 
 static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
 {
-	pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED;
-
-	/* get true return value from &status */
-	device_for_each_child(&dev->dev, &status, mmio_enabled_iter);
-	return status;
-}
-
-static int slot_reset_iter(struct device *device, void *data)
-{
-	struct pcie_device *pcie_device;
-	struct pcie_port_service_driver *driver;
-	pci_ers_result_t status, *result;
-
-	result = (pci_ers_result_t *) data;
-
-	if (device->bus == &pcie_port_bus_type && device->driver) {
-		driver = to_service_driver(device->driver);
-		if (driver &&
-		    driver->err_handler &&
-		    driver->err_handler->slot_reset) {
-			pcie_device = to_pcie_device(device);
-
-			/* Forward error message to service drivers */
-			status = driver->err_handler->slot_reset(
-				pcie_device->port);
-			*result = merge_result(*result, status);
-		}
-	}
-
-	return 0;
+	return PCI_ERS_RESULT_RECOVERED;
 }
 
 static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
 {
-	pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED;
-
 	/* If fatal, restore cfg space for possible link reset at upstream */
 	if (dev->error_state == pci_channel_io_frozen) {
 		dev->state_saved = true;
@@ -287,9 +198,7 @@ static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
 		pci_enable_pcie_error_reporting(dev);
 	}
 
-	/* get true return value from &status */
-	device_for_each_child(&dev->dev, &status, slot_reset_iter);
-	return status;
+	return PCI_ERS_RESULT_RECOVERED;
 }
 
 static int resume_iter(struct device *device, void *data)
@@ -299,13 +208,11 @@ static int resume_iter(struct device *device, void *data)
 
 	if (device->bus == &pcie_port_bus_type && device->driver) {
 		driver = to_service_driver(device->driver);
-		if (driver &&
-		    driver->err_handler &&
-		    driver->err_handler->resume) {
+		if (driver && driver->error_resume) {
 			pcie_device = to_pcie_device(device);
 
 			/* Forward error message to service drivers */
-			driver->err_handler->resume(pcie_device->port);
+			driver->error_resume(pcie_device->port);
 		}
 	}
 
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index e6a917b4acd3..ff94b69738a8 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -1745,21 +1745,50 @@ static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
 	 */
 }
 
-static void pci_configure_extended_tags(struct pci_dev *dev)
+int pci_configure_extended_tags(struct pci_dev *dev, void *ign)
 {
-	u32 dev_cap;
+	struct pci_host_bridge *host;
+	u32 cap;
+	u16 ctl;
 	int ret;
 
 	if (!pci_is_pcie(dev))
-		return;
+		return 0;
 
-	ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &dev_cap);
+	ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
 	if (ret)
-		return;
+		return 0;
+
+	if (!(cap & PCI_EXP_DEVCAP_EXT_TAG))
+		return 0;
+
+	ret = pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &ctl);
+	if (ret)
+		return 0;
+
+	host = pci_find_host_bridge(dev->bus);
+	if (!host)
+		return 0;
 
-	if (dev_cap & PCI_EXP_DEVCAP_EXT_TAG)
+	/*
+	 * If some device in the hierarchy doesn't handle Extended Tags
+	 * correctly, make sure they're disabled.
+	 */
+	if (host->no_ext_tags) {
+		if (ctl & PCI_EXP_DEVCTL_EXT_TAG) {
+			dev_info(&dev->dev, "disabling Extended Tags\n");
+			pcie_capability_clear_word(dev, PCI_EXP_DEVCTL,
+						   PCI_EXP_DEVCTL_EXT_TAG);
+		}
+		return 0;
+	}
+
+	if (!(ctl & PCI_EXP_DEVCTL_EXT_TAG)) {
+		dev_info(&dev->dev, "enabling Extended Tags\n");
 		pcie_capability_set_word(dev, PCI_EXP_DEVCTL,
					 PCI_EXP_DEVCTL_EXT_TAG);
+	}
+	return 0;
 }
 
 /**
@@ -1810,7 +1839,7 @@ static void pci_configure_device(struct pci_dev *dev)
 	int ret;
 
 	pci_configure_mps(dev);
-	pci_configure_extended_tags(dev);
+	pci_configure_extended_tags(dev, NULL);
 	pci_configure_relaxed_ordering(dev);
 
 	memset(&hpp, 0, sizeof(hpp));
@@ -1867,42 +1896,69 @@ struct pci_dev *pci_alloc_dev(struct pci_bus *bus)
 }
 EXPORT_SYMBOL(pci_alloc_dev);
 
-bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
-				int crs_timeout)
+static bool pci_bus_crs_vendor_id(u32 l)
+{
+	return (l & 0xffff) == 0x0001;
+}
+
+static bool pci_bus_wait_crs(struct pci_bus *bus, int devfn, u32 *l,
+			     int timeout)
 {
 	int delay = 1;
 
-	if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l))
-		return false;
+	if (!pci_bus_crs_vendor_id(*l))
+		return true;	/* not a CRS completion */
 
-	/* some broken boards return 0 or ~0 if a slot is empty: */
-	if (*l == 0xffffffff || *l == 0x00000000 ||
-	    *l == 0x0000ffff || *l == 0xffff0000)
-		return false;
+	if (!timeout)
+		return false;	/* CRS, but caller doesn't want to wait */
 
 	/*
-	 * Configuration Request Retry Status. Some root ports return the
-	 * actual device ID instead of the synthetic ID (0xFFFF) required
-	 * by the PCIe spec. Ignore the device ID and only check for
-	 * (vendor id == 1).
+	 * We got the reserved Vendor ID that indicates a completion with
+	 * Configuration Request Retry Status (CRS). Retry until we get a
+	 * valid Vendor ID or we time out.
 	 */
-	while ((*l & 0xffff) == 0x0001) {
-		if (!crs_timeout)
+	while (pci_bus_crs_vendor_id(*l)) {
+		if (delay > timeout) {
+			pr_warn("pci %04x:%02x:%02x.%d: not ready after %dms; giving up\n",
+				pci_domain_nr(bus), bus->number,
+				PCI_SLOT(devfn), PCI_FUNC(devfn), delay - 1);
+
 			return false;
+		}
+		if (delay >= 1000)
+			pr_info("pci %04x:%02x:%02x.%d: not ready after %dms; waiting\n",
+				pci_domain_nr(bus), bus->number,
+				PCI_SLOT(devfn), PCI_FUNC(devfn), delay - 1);
 
 		msleep(delay);
 		delay *= 2;
+
 		if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l))
 			return false;
-		/* Card hasn't responded in 60 seconds?  Must be stuck. */
-		if (delay > crs_timeout) {
-			printk(KERN_WARNING "pci %04x:%02x:%02x.%d: not responding\n",
-			       pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
-			       PCI_FUNC(devfn));
-			return false;
-		}
 	}
 
+	if (delay >= 1000)
+		pr_info("pci %04x:%02x:%02x.%d: ready after %dms\n",
+			pci_domain_nr(bus), bus->number,
+			PCI_SLOT(devfn), PCI_FUNC(devfn), delay - 1);
+
+	return true;
+}
+
+bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
+				int timeout)
+{
+	if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l))
+		return false;
+
+	/* some broken boards return 0 or ~0 if a slot is empty: */
+	if (*l == 0xffffffff || *l == 0x00000000 ||
+	    *l == 0x0000ffff || *l == 0xffff0000)
+		return false;
+
+	if (pci_bus_crs_vendor_id(*l))
+		return pci_bus_wait_crs(bus, devfn, l, timeout);
+
 	return true;
 }
 EXPORT_SYMBOL(pci_bus_read_dev_vendor_id);
@@ -2331,6 +2387,15 @@ void pcie_bus_configure_settings(struct pci_bus *bus)
 }
 EXPORT_SYMBOL_GPL(pcie_bus_configure_settings);
 
+/*
+ * Called after each bus is probed, but before its children are examined. This
+ * is marked as __weak because multiple architectures define it.
+ */
+void __weak pcibios_fixup_bus(struct pci_bus *bus)
+{
+       /* nothing to do, expected to be removed in the future */
+}
+
 unsigned int pci_scan_child_bus(struct pci_bus *bus)
 {
 	unsigned int devfn, pass, max = bus->busn_res.start;
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index a346487a9532..a2afb44fad10 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -2062,7 +2062,7 @@ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
 
 /*
 * The 82575 and 82598 may experience data corruption issues when transitioning
- * out of L0S. To prevent this we need to disable L0S on the pci-e link
+ * out of L0S. To prevent this we need to disable L0S on the PCIe link.
 */
 static void quirk_disable_aspm_l0s(struct pci_dev *dev)
 {
@@ -4227,6 +4227,18 @@ static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags)
 	return acs_flags ? 0 : 1;
 }
 
+static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags)
+{
+	/*
+	 * X-Gene root matching this quirk do not allow peer-to-peer
+	 * transactions with others, allowing masking out these bits as if they
+	 * were unimplemented in the ACS capability.
+	 */
+	acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+
+	return acs_flags ? 0 : 1;
+}
+
 /*
 * Many Intel PCH root ports do provide ACS-like features to disable peer
 * transactions and validate bus numbers in requests, but do not provide an
@@ -4475,6 +4487,8 @@ static const struct pci_dev_acs_enabled {
 	{ 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */
 	/* Cavium ThunderX */
 	{ PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs },
+	/* APM X-Gene */
+	{ PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs },
 	{ 0 }
 };
 
@@ -4747,23 +4761,6 @@ static void quirk_intel_qat_vf_cap(struct pci_dev *pdev)
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap);
 
-/*
- * VMD-enabled root ports will change the source ID for all messages
- * to the VMD device. Rather than doing device matching with the source
- * ID, the AER driver should traverse the child device tree, reading
- * AER registers to find the faulting device.
- */
-static void quirk_no_aersid(struct pci_dev *pdev)
-{
-	/* VMD Domain */
-	if (pdev->bus->sysdata && pci_domain_nr(pdev->bus) >= 0x10000)
-		pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID;
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid);
-
 /* FLR may cause some 82579 devices to hang.
  */
 static void quirk_intel_no_flr(struct pci_dev *dev)
 {
@@ -4771,3 +4768,34 @@
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_intel_no_flr);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_intel_no_flr);
+
+static void quirk_no_ext_tags(struct pci_dev *pdev)
+{
+	struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
+
+	if (!bridge)
+		return;
+
+	bridge->no_ext_tags = 1;
+	dev_info(&pdev->dev, "disabling Extended Tags (this device can't handle them)\n");
+
+	pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL);
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0142, quirk_no_ext_tags);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0144, quirk_no_ext_tags);
+
+#ifdef CONFIG_PCI_ATS
+/*
+ * Some devices have a broken ATS implementation causing IOMMU stalls.
+ * Don't use ATS for those devices.
+ */
+static void quirk_no_ats(struct pci_dev *pdev)
+{
+	dev_info(&pdev->dev, "disabling ATS (broken on this device)\n");
+	pdev->ats_cap = 0;
+}
+
+/* AMD Stoney platform GPU */
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats);
+#endif /* CONFIG_PCI_ATS */
diff --git a/drivers/pci/setup-irq.c b/drivers/pci/setup-irq.c
index 81eda3d93a5d..86106c44ce94 100644
--- a/drivers/pci/setup-irq.c
+++ b/drivers/pci/setup-irq.c
@@ -17,12 +17,6 @@
 #include <linux/cache.h>
 #include "pci.h"
 
-void __weak pcibios_update_irq(struct pci_dev *dev, int irq)
-{
-	dev_dbg(&dev->dev, "assigning IRQ %02d\n", irq);
-	pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
-}
-
 void pci_assign_irq(struct pci_dev *dev)
 {
 	u8 pin;
@@ -65,29 +59,5 @@ void pci_assign_irq(struct pci_dev *dev)
 	/* Always tell the device, so the driver knows what is the real IRQ
 	   to use; the device does not use it. */
-	pcibios_update_irq(dev, irq);
-}
-
-void pci_fixup_irqs(u8 (*swizzle)(struct pci_dev *, u8 *),
-		    int (*map_irq)(const struct pci_dev *, u8, u8))
-{
-	/*
-	 * Implement pci_fixup_irqs() through pci_assign_irq().
-	 * This code should be remove eventually, it is a wrapper
-	 * around pci_assign_irq() interface to keep current
-	 * pci_fixup_irqs() behaviour unchanged on architecture
-	 * code still relying on its interface.
-	 */
-	struct pci_dev *dev = NULL;
-	struct pci_host_bridge *hbrg = NULL;
-
-	for_each_pci_dev(dev) {
-		hbrg = pci_find_host_bridge(dev->bus);
-		hbrg->swizzle_irq = swizzle;
-		hbrg->map_irq = map_irq;
-		pci_assign_irq(dev);
-		hbrg->swizzle_irq = NULL;
-		hbrg->map_irq = NULL;
-	}
+	pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
 }
-EXPORT_SYMBOL_GPL(pci_fixup_irqs);
diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
index 85774b7a316a..e576e1a8d978 100644
--- a/drivers/pci/setup-res.c
+++ b/drivers/pci/setup-res.c
@@ -234,6 +234,19 @@ static int pci_revert_fw_address(struct resource *res, struct pci_dev *dev,
 	return 0;
 }
 
+/*
+ * We don't have to worry about legacy ISA devices, so nothing to do here.
+ * This is marked as __weak because multiple architectures define it; it should
+ * eventually go away.
+ */
+resource_size_t __weak pcibios_align_resource(void *data,
+					      const struct resource *res,
+					      resource_size_t size,
+					      resource_size_t align)
+{
+	return res->start;
+}
+
 static int __pci_assign_resource(struct pci_bus *bus, struct pci_dev *dev,
 		int resno, resource_size_t size, resource_size_t align)
 {
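
Editor's note: the pci_flr_wait() and pci_bus_wait_crs() changes above both replace a fixed retry count with the same delay-doubling poll (1, 2, 4, ... ms, capped at 60 s total, with "waiting" messages once the wait passes 1 s). The following is a minimal user-space C sketch of that schedule only, not kernel code from this merge; device_ready() is a hypothetical stand-in for the real config-space read (a PCI_COMMAND read returning something other than ~0, or a Vendor ID that is not the CRS value 0x0001), and the 3-second readiness it simulates is an arbitrary choice for the example.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the config-space readiness check. */
static bool device_ready(int elapsed_ms)
{
	/* Simulated device: pretends to need ~3 seconds after reset. */
	return elapsed_ms >= 3000;
}

int main(void)
{
	int delay = 1, timeout = 60000;	/* ms, matching the FLR path */
	int elapsed = 100;		/* mandatory 100 ms sleep after FLR */

	while (!device_ready(elapsed)) {
		if (delay > timeout) {
			printf("not ready %dms after FLR; giving up\n", elapsed);
			return 1;
		}
		if (delay > 1000)
			printf("not ready %dms after FLR; waiting\n", elapsed);

		/* The kernel msleep()s here; just account for the time. */
		elapsed += delay;
		delay *= 2;
	}

	if (delay > 1000)
		printf("ready %dms after FLR\n", elapsed);
	return 0;
}

Because the delay doubles each iteration, the total time slept before trying delay d is 100 + (d - 1) ms, which is why the kernel messages in the diff print "100 + delay - 1"; the sketch's elapsed counter tracks the same quantity.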