author     Linus Torvalds <torvalds@linux-foundation.org>	2023-07-01 01:06:45 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>	2023-07-01 01:06:45 +0300
commit     9070577ae9d6065e447d422bdf85a09f89eaa9e8 (patch)
tree       329b4b63b5720ba649c88ceeb9f6548ab55dd569 /drivers/pci
parent     28968f384be3c064d66954aac4c534a5e76bf973 (diff)
parent     6ecac465eee887de7ceda7ffe3bccf538eb786bc (diff)
download   linux-9070577ae9d6065e447d422bdf85a09f89eaa9e8.tar.xz
Merge tag 'pci-v6.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci
Pull pci updates from Bjorn Helgaas:
"Enumeration:
- Export pcie_retrain_link() for use outside ASPM
- Add Data Link Layer Link Active Reporting as another way for
pcie_retrain_link() to determine the link is up
- Work around link training failures (especially on the ASMedia
ASM2824 switch) by training first at 2.5GT/s and then attempting
higher rates
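
A rough sketch of the link-up test described above (illustrative only, not the exact upstream pcie_retrain_link() code): when the Port advertises Data Link Layer Link Active Reporting, the DLLLA status bit is the reliable signal; otherwise all the code can do is wait for the Link Training bit to clear.

    #include <linux/pci.h>

    /* Sketch: decide whether the link behind a bridge is up. */
    static bool sketch_link_is_up(struct pci_dev *bridge)
    {
        u32 lnkcap;
        u16 lnksta;

        pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &lnkcap);
        pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta);

        if (lnkcap & PCI_EXP_LNKCAP_DLLLARC)
            /* Data Link Layer Link Active Reporting is implemented */
            return lnksta & PCI_EXP_LNKSTA_DLLLA;

        /* Fall back to "link training has finished" */
        return !(lnksta & PCI_EXP_LNKSTA_LT);
    }

The ASM2824 workaround uses the same registers: it caps the Target Link Speed in the Link Control 2 register at 2.5 GT/s, retrains, and only then lifts the cap so higher rates can be negotiated.
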
Resource management:
- When we coalesce host bridge windows, remove invalidated resources
from the resource tree so future allocations work correctly
Hotplug:
- Cancel bringup sequence if card is not present, to keep from
blinking Power Indicator indefinitely
- Reassign bridge resources if necessary for ACPI hotplug
Driver binding:
- Convert platform_device .remove() callbacks to return void instead
of a mostly useless int
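
The conversion is mechanical; here is a before/after sketch for a hypothetical foo platform driver (the two versions of foo_remove() are alternatives, and the j721e hunk kept in the diff below follows exactly this pattern):

    /* Before: the int return value is almost always 0 and is ignored anyway */
    static int foo_remove(struct platform_device *pdev)
    {
        struct foo_priv *priv = platform_get_drvdata(pdev);

        foo_teardown(priv);
        return 0;
    }

    /* After: .remove_new takes a callback that cannot fail */
    static void foo_remove(struct platform_device *pdev)
    {
        struct foo_priv *priv = platform_get_drvdata(pdev);

        foo_teardown(priv);
    }

    static struct platform_driver foo_driver = {
        .probe      = foo_probe,
        .remove_new = foo_remove,
        .driver     = {
            .name = "foo",
        },
    };
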
Power management:
- Reduce wait time for secondary bus to be ready to speed up resume
- Avoid putting EloPOS E2/S2/H2 (as well as Elo i2) PCIe Ports in
D3cold
- Call _REG when transitioning D-states so AML that uses the PCI
config space OpRegion works, which fixes some ASMedia GPIO
controllers after resume
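
The EloPOS/Elo item above is, in essence, a DMI-based deny list consulted when deciding whether a PCIe Port may go to D3cold; a minimal sketch of that mechanism (the system names below are placeholders, not the actual entries added by this series):

    #include <linux/dmi.h>

    static const struct dmi_system_id sketch_bridge_d3_denylist[] = {
        {
            .ident = "Example all-in-one whose ports break in D3cold",
            .matches = {
                DMI_MATCH(DMI_SYS_VENDOR, "Example Vendor"),
                DMI_MATCH(DMI_PRODUCT_NAME, "Example Product"),
            },
        },
        { }
    };

    static bool sketch_bridge_may_use_d3(void)
    {
        /* Keep the port out of D3cold on known-bad systems */
        return !dmi_check_system(sketch_bridge_d3_denylist);
    }
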
Virtualization:
- Delay extra 250ms after FLR of Solidigm P44 Pro NVMe to avoid KVM
hang when guest is rebooted
- Add function 1 DMA alias quirk for Marvell 88SE9235
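
The Marvell fix relies on the long-standing function 1 DMA alias machinery in drivers/pci/quirks.c; a sketch of what such a quirk looks like (modelled on the existing quirk_dma_func1_alias(), with the device ID taken from the changelog item above):

    static void sketch_dma_func1_alias(struct pci_dev *dev)
    {
        /*
         * DMA requests may arrive tagged with function 1 of the same
         * slot, so teach the IOMMU layer about that alias.
         */
        if (PCI_FUNC(dev->devfn) != 1)
            pci_add_dma_alias(dev, PCI_DEVFN(PCI_SLOT(dev->devfn), 1), 1);
    }
    DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235,
                             sketch_dma_func1_alias);
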
Error handling:
- Unexport pci_save_aer_state() since it's only used in drivers/pci/
- Drop recommendation for drivers to configure AER Capability, since
the PCI core does this for all devices
ASPM:
- Disable ASPM on MFD function removal to avoid use-after-free
- Tighten up pci_enable_link_state() and pci_disable_link_state()
interfaces so they don't enable/disable states the driver didn't
specify
- Avoid link retraining race that can happen if ASPM sets link
control parameters while the link is in the midst of training for
some other reason
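
With the tightened interfaces, a driver states exactly which link states it wants touched and nothing else is changed behind its back; an illustrative call, not taken from any specific driver:

    static int sketch_limit_aspm(struct pci_dev *pdev)
    {
        int ret;

        /* Allow L0s and L1 only; other states are left as they are */
        ret = pci_enable_link_state(pdev, PCIE_LINK_STATE_L0S |
                                          PCIE_LINK_STATE_L1);
        if (ret)
            dev_warn(&pdev->dev, "ASPM states not granted: %d\n", ret);

        return ret;
    }
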
Endpoint framework:
- Change "PCI Endpoint Virtual NTB driver" Kconfig prompt to be
different from "PCI Endpoint NTB driver"
- Automatically create a function specific attributes group for
endpoint drivers to avoid reference counting issues
- Fix many EPC test issues
- Return pci_epf_type_add_cfs() error if EPF has no driver
- Add kernel-doc for pci_epc_raise_irq() and pci_epc_map_msi_irq()
MSI vector parameters
- Pass EPF device ID to driver probe functions
- Return -EALREADY if EPC has already been started/stopped
- Add linkdown notifier support and use it in qcom-ep
- Add Bus Master Enable event support and use it in qcom-ep
- Add Qualcomm Modem Host Interface (MHI) endpoint driver
- Add Layerscape PME interrupt handling to manage link-up
notification
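
On the controller side, the linkdown and Bus Master Enable notifications above are plumbed the way the qcom-ep and layerscape hunks do it: an interrupt handler decodes the hardware event and forwards it to the endpoint core. A condensed sketch; the register flags and the status helper are made up, while pci_epc_linkdown(), pci_epc_bme_notify() and dw_pcie_ep_linkup() are the real calls used in this series:

    #include <linux/interrupt.h>
    #include <linux/pci-epc.h>
    #include "pcie-designware.h"

    static irqreturn_t sketch_ep_event_handler(int irq, void *data)
    {
        struct dw_pcie_ep *ep = data;
        struct pci_epc *epc = ep->epc;
        u32 status = sketch_read_and_clear_events(ep);  /* hypothetical helper */

        if (status & SKETCH_EVENT_LINK_UP)
            dw_pcie_ep_linkup(ep);          /* ends up in pci_epc_linkup() */
        if (status & SKETCH_EVENT_LINK_DOWN)
            pci_epc_linkdown(epc);
        if (status & SKETCH_EVENT_BME)
            pci_epc_bme_notify(epc);

        return status ? IRQ_HANDLED : IRQ_NONE;
    }
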
Cadence PCIe controller driver:
- Wait for link retrain to complete when working around the J721E
i2085 erratum with Gen2 mode
Faraday FTPC100 PCI controller driver:
- Release clock resources on error paths
Freescale i.MX6 PCIe controller driver:
- Save and restore Root Port MSI control to work around hardware defect
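
The defect is that the Root Port's MSI control word lives in DBI space that does not survive suspend, so the driver now caches it on the way down and rewrites it on resume. The essence of the new helper, paraphrased rather than quoted from the hunk:

    static void sketch_msi_ctrl_save_restore(struct dw_pcie *pci, u16 *cache,
                                             bool save)
    {
        u8 off = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);

        if (save) {
            *cache = dw_pcie_readw_dbi(pci, off + PCI_MSI_FLAGS);
        } else {
            dw_pcie_dbi_ro_wr_en(pci);
            dw_pcie_writew_dbi(pci, off + PCI_MSI_FLAGS, *cache);
            dw_pcie_dbi_ro_wr_dis(pci);
        }
    }
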
Intel VMD host bridge driver:
- Reset VMD config register between soft reboots
- Capture pci_reset_bus() return value instead of printing junk when
it fails
Qualcomm PCIe controller driver:
- Add SDX65 endpoint compatible string to DT binding
- Disable register write access after init for IP v2.3.3, v2.9.0
- Use DWC helpers for enabling/disabling writes to DBI registers
- Hide slot hotplug capability for IP v1.0.0, v1.9.0, v2.1.0, v2.3.2,
v2.3.3, v2.7.0, v2.9.0
- Reuse v2.3.2 post-init sequence for v2.4.0
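
"Hide slot hotplug capability" above boils down to clearing the Hot-Plug Capable bit in Slot Capabilities while DBI writes are temporarily unlocked; a paraphrase of the qcom_pcie_clear_hpc() helper this series adds (the in-tree version writes dbi_base directly):

    static void sketch_clear_hpc(struct dw_pcie *pci)
    {
        u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
        u32 val;

        dw_pcie_dbi_ro_wr_en(pci);

        val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_SLTCAP);
        val &= ~PCI_EXP_SLTCAP_HPC;
        dw_pcie_writel_dbi(pci, offset + PCI_EXP_SLTCAP, val);

        dw_pcie_dbi_ro_wr_dis(pci);
    }
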
Renesas R-Car PCIe controller driver:
- Remove unused static pcie_base and pcie_dev
Rockchip PCIe controller driver:
- Remove writes to unused registers
- Write endpoint Device ID using correct register
- Assert PCI Configuration Enable bit after probe so endpoint
responds instead of generating Request Retry Status messages
- Poll waiting for PHY PLLs to lock
- Update RK3399 example DT binding to be valid
- Use RK3399 PCIE_CLIENT_LEGACY_INT_CTRL to generate INTx instead of
manually generating PCIe message
- Use multiple windows to avoid address translation conflicts
- Use u32 (not u16) when accessing 32-bit registers
- Hide MSI-X Capability, since RK3399 can't generate MSI-X
- Set endpoint controller required alignment to 256
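
The "multiple windows" item above gives each 1 MiB chunk of the endpoint's outbound memory its own fixed translation region, so a physical address maps straight to a region index; the helper added by the series is essentially:

    static inline u32 rockchip_ob_region(phys_addr_t addr)
    {
        /* One fixed 1 MiB outbound window per region, 32 regions max */
        return (addr >> ilog2(SZ_1M)) & 0x1f;
    }

rockchip_pcie_ep_map_addr() then derives the region index directly from the address instead of searching a shared bitmap for a free window.
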
Synopsys DesignWare PCIe controller driver:
- Wait for link to come up only if we've initiated link training
Miscellaneous:
- Add pci_clear_master() stub for non-CONFIG_PCI"
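
The last item refers to the usual pattern in include/linux/pci.h where the API collapses to no-op inlines when CONFIG_PCI is disabled, so callers need no #ifdefs. The shape of such a stub (illustrative, not the exact hunk):

    #ifdef CONFIG_PCI
    void pci_set_master(struct pci_dev *dev);
    void pci_clear_master(struct pci_dev *dev);
    #else
    static inline void pci_set_master(struct pci_dev *dev) { }
    static inline void pci_clear_master(struct pci_dev *dev) { }
    #endif
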
* tag 'pci-v6.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (116 commits)
Documentation: PCI: correct spelling
PCI: vmd: Fix uninitialized variable usage in vmd_enable_domain()
PCI: xgene-msi: Convert to platform remove callback returning void
PCI: tegra: Convert to platform remove callback returning void
PCI: rockchip-host: Convert to platform remove callback returning void
PCI: mvebu: Convert to platform remove callback returning void
PCI: mt7621: Convert to platform remove callback returning void
PCI: mediatek-gen3: Convert to platform remove callback returning void
PCI: mediatek: Convert to platform remove callback returning void
PCI: iproc: Convert to platform remove callback returning void
PCI: hisi-error: Convert to platform remove callback returning void
PCI: dwc: Convert to platform remove callback returning void
PCI: j721e: Convert to platform remove callback returning void
PCI: brcmstb: Convert to platform remove callback returning void
PCI: altera-msi: Convert to platform remove callback returning void
PCI: altera: Convert to platform remove callback returning void
PCI: aardvark: Convert to platform remove callback returning void
PCI: rcar: Use correct product family name for Renesas R-Car
PCI: layerscape: Add the endpoint linkup notifier support
PCI: endpoint: pci-epf-vntb: Fix typo in comments
...
Diffstat (limited to 'drivers/pci')
53 files changed, 1514 insertions, 676 deletions
diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
index cc83a8925ce0..e70213c9060a 100644
--- a/drivers/pci/controller/cadence/pci-j721e.c
+++ b/drivers/pci/controller/cadence/pci-j721e.c
@@ -542,7 +542,7 @@ err_get_sync:
 	return ret;
 }
 
-static int j721e_pcie_remove(struct platform_device *pdev)
+static void j721e_pcie_remove(struct platform_device *pdev)
 {
 	struct j721e_pcie *pcie = platform_get_drvdata(pdev);
 	struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
@@ -552,13 +552,11 @@ static int j721e_pcie_remove(struct platform_device *pdev)
 	cdns_pcie_disable_phy(cdns_pcie);
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
-
-	return 0;
 }
 
 static struct platform_driver j721e_pcie_driver = {
 	.probe = j721e_pcie_probe,
-	.remove = j721e_pcie_remove,
+	.remove_new = j721e_pcie_remove,
 	.driver = {
 		.name = "j721e-pcie",
 		.of_match_table = of_j721e_pcie_match,
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 940c7dd701d6..5b14f7ee3c79 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -12,6 +12,8 @@
 
 #include "pcie-cadence.h"
 
+#define LINK_RETRAIN_TIMEOUT HZ
+
 static u64 bar_max_size[] = {
 	[RP_BAR0] = _ULL(128 * SZ_2G),
 	[RP_BAR1] = SZ_2G,
@@ -77,6 +79,27 @@ static struct pci_ops cdns_pcie_host_ops = {
 	.write	= pci_generic_config_write,
 };
 
+static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
+{
+	u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
+	unsigned long end_jiffies;
+	u16 lnk_stat;
+
+	/* Wait for link training to complete. Exit after timeout. */
+	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
+	do {
+		lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
+		if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
+			break;
+		usleep_range(0, 1000);
+	} while (time_before(jiffies, end_jiffies));
+
+	if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
+		return 0;
+
+	return -ETIMEDOUT;
+}
+
 static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
@@ -118,6 +141,10 @@ static int cdns_pcie_retrain(struct cdns_pcie *pcie)
 		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
 				    lnk_ctl);
 
+		ret = cdns_pcie_host_training_complete(pcie);
+		if (ret)
+			return ret;
+
 		ret = cdns_pcie_host_wait_for_link(pcie);
 	}
 	return ret;
diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
index 52906f999f2b..27aaa2a6bf39 100644
...
diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
...
diff --git a/drivers/pci/controller/dwc/pcie-bt1.c b/drivers/pci/controller/dwc/pcie-bt1.c
...
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
...
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
...
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
...
diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
...
diff --git a/drivers/pci/controller/dwc/pcie-intel-gw.c b/drivers/pci/controller/dwc/pcie-intel-gw.c
...
diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
...
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
...
diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
...
diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
...
diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
...
diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
...
diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
...
diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c
...
diff --git a/drivers/pci/controller/pcie-altera-msi.c b/drivers/pci/controller/pcie-altera-msi.c
...
diff --git a/drivers/pci/controller/pcie-altera.c b/drivers/pci/controller/pcie-altera.c
...
diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
...
diff --git a/drivers/pci/controller/pcie-hisi-error.c b/drivers/pci/controller/pcie-hisi-error.c
...
diff --git a/drivers/pci/controller/pcie-iproc-platform.c b/drivers/pci/controller/pcie-iproc-platform.c
...
diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
...
diff --git a/drivers/pci/controller/pcie-iproc.h b/drivers/pci/controller/pcie-iproc.h
...
diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
...
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
...
diff --git a/drivers/pci/controller/pcie-mt7621.c b/drivers/pci/controller/pcie-mt7621.c
...
diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
...
diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
...
diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
...
diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
...
diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
...
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
...
diff --git a/drivers/pci/endpoint/functions/Kconfig b/drivers/pci/endpoint/functions/Kconfig
...
b/drivers/pci/endpoint/functions/Kconfig @@ -27,7 +27,7 @@ config PCI_EPF_NTB If in doubt, say "N" to disable Endpoint NTB driver. config PCI_EPF_VNTB - tristate "PCI Endpoint NTB driver" + tristate "PCI Endpoint Virtual NTB driver" depends on PCI_ENDPOINT depends on NTB select CONFIGFS_FS @@ -37,3 +37,13 @@ config PCI_EPF_VNTB between PCI Root Port and PCIe Endpoint. If in doubt, say "N" to disable Endpoint NTB driver. + +config PCI_EPF_MHI + tristate "PCI Endpoint driver for MHI bus" + depends on PCI_ENDPOINT && MHI_BUS_EP + help + Enable this configuration option to enable the PCI Endpoint + driver for Modem Host Interface (MHI) bus in Qualcomm Endpoint + devices such as SDX55. + + If in doubt, say "N" to disable Endpoint driver for MHI bus. diff --git a/drivers/pci/endpoint/functions/Makefile b/drivers/pci/endpoint/functions/Makefile index 5c13001deaba..696473fce50e 100644 --- a/drivers/pci/endpoint/functions/Makefile +++ b/drivers/pci/endpoint/functions/Makefile @@ -6,3 +6,4 @@ obj-$(CONFIG_PCI_EPF_TEST) += pci-epf-test.o obj-$(CONFIG_PCI_EPF_NTB) += pci-epf-ntb.o obj-$(CONFIG_PCI_EPF_VNTB) += pci-epf-vntb.o +obj-$(CONFIG_PCI_EPF_MHI) += pci-epf-mhi.o diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c new file mode 100644 index 000000000000..9c1f5a154fbd --- /dev/null +++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c @@ -0,0 +1,458 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCI EPF driver for MHI Endpoint devices + * + * Copyright (C) 2023 Linaro Ltd. + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> + */ + +#include <linux/mhi_ep.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/pci-epc.h> +#include <linux/pci-epf.h> + +#define MHI_VERSION_1_0 0x01000000 + +#define to_epf_mhi(cntrl) container_of(cntrl, struct pci_epf_mhi, cntrl) + +struct pci_epf_mhi_ep_info { + const struct mhi_ep_cntrl_config *config; + struct pci_epf_header *epf_header; + enum pci_barno bar_num; + u32 epf_flags; + u32 msi_count; + u32 mru; +}; + +#define MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, direction) \ + { \ + .num = ch_num, \ + .name = ch_name, \ + .dir = direction, \ + } + +#define MHI_EP_CHANNEL_CONFIG_UL(ch_num, ch_name) \ + MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, DMA_TO_DEVICE) + +#define MHI_EP_CHANNEL_CONFIG_DL(ch_num, ch_name) \ + MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, DMA_FROM_DEVICE) + +static const struct mhi_ep_channel_config mhi_v1_channels[] = { + MHI_EP_CHANNEL_CONFIG_UL(0, "LOOPBACK"), + MHI_EP_CHANNEL_CONFIG_DL(1, "LOOPBACK"), + MHI_EP_CHANNEL_CONFIG_UL(2, "SAHARA"), + MHI_EP_CHANNEL_CONFIG_DL(3, "SAHARA"), + MHI_EP_CHANNEL_CONFIG_UL(4, "DIAG"), + MHI_EP_CHANNEL_CONFIG_DL(5, "DIAG"), + MHI_EP_CHANNEL_CONFIG_UL(6, "SSR"), + MHI_EP_CHANNEL_CONFIG_DL(7, "SSR"), + MHI_EP_CHANNEL_CONFIG_UL(8, "QDSS"), + MHI_EP_CHANNEL_CONFIG_DL(9, "QDSS"), + MHI_EP_CHANNEL_CONFIG_UL(10, "EFS"), + MHI_EP_CHANNEL_CONFIG_DL(11, "EFS"), + MHI_EP_CHANNEL_CONFIG_UL(12, "MBIM"), + MHI_EP_CHANNEL_CONFIG_DL(13, "MBIM"), + MHI_EP_CHANNEL_CONFIG_UL(14, "QMI"), + MHI_EP_CHANNEL_CONFIG_DL(15, "QMI"), + MHI_EP_CHANNEL_CONFIG_UL(16, "QMI"), + MHI_EP_CHANNEL_CONFIG_DL(17, "QMI"), + MHI_EP_CHANNEL_CONFIG_UL(18, "IP-CTRL-1"), + MHI_EP_CHANNEL_CONFIG_DL(19, "IP-CTRL-1"), + MHI_EP_CHANNEL_CONFIG_UL(20, "IPCR"), + MHI_EP_CHANNEL_CONFIG_DL(21, "IPCR"), + MHI_EP_CHANNEL_CONFIG_UL(32, "DUN"), + MHI_EP_CHANNEL_CONFIG_DL(33, "DUN"), + MHI_EP_CHANNEL_CONFIG_UL(46, "IP_SW0"), + MHI_EP_CHANNEL_CONFIG_DL(47, "IP_SW0"), +}; + +static 
const struct mhi_ep_cntrl_config mhi_v1_config = { + .max_channels = 128, + .num_channels = ARRAY_SIZE(mhi_v1_channels), + .ch_cfg = mhi_v1_channels, + .mhi_version = MHI_VERSION_1_0, +}; + +static struct pci_epf_header sdx55_header = { + .vendorid = PCI_VENDOR_ID_QCOM, + .deviceid = 0x0306, + .baseclass_code = PCI_BASE_CLASS_COMMUNICATION, + .subclass_code = PCI_CLASS_COMMUNICATION_MODEM & 0xff, + .interrupt_pin = PCI_INTERRUPT_INTA, +}; + +static const struct pci_epf_mhi_ep_info sdx55_info = { + .config = &mhi_v1_config, + .epf_header = &sdx55_header, + .bar_num = BAR_0, + .epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32, + .msi_count = 32, + .mru = 0x8000, +}; + +struct pci_epf_mhi { + const struct pci_epf_mhi_ep_info *info; + struct mhi_ep_cntrl mhi_cntrl; + struct pci_epf *epf; + struct mutex lock; + void __iomem *mmio; + resource_size_t mmio_phys; + u32 mmio_size; + int irq; +}; + +static int __pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, + phys_addr_t *paddr, void __iomem **vaddr, + size_t offset, size_t size) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + struct pci_epf *epf = epf_mhi->epf; + struct pci_epc *epc = epf->epc; + int ret; + + *vaddr = pci_epc_mem_alloc_addr(epc, paddr, size + offset); + if (!*vaddr) + return -ENOMEM; + + ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, *paddr, + pci_addr - offset, size + offset); + if (ret) { + pci_epc_mem_free_addr(epc, *paddr, *vaddr, size + offset); + return ret; + } + + *paddr = *paddr + offset; + *vaddr = *vaddr + offset; + + return 0; +} + +static int pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, + phys_addr_t *paddr, void __iomem **vaddr, + size_t size) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + struct pci_epc *epc = epf_mhi->epf->epc; + size_t offset = pci_addr & (epc->mem->window.page_size - 1); + + return __pci_epf_mhi_alloc_map(mhi_cntrl, pci_addr, paddr, vaddr, + offset, size); +} + +static void __pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, + u64 pci_addr, phys_addr_t paddr, + void __iomem *vaddr, size_t offset, + size_t size) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + struct pci_epf *epf = epf_mhi->epf; + struct pci_epc *epc = epf->epc; + + pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, paddr - offset); + pci_epc_mem_free_addr(epc, paddr - offset, vaddr - offset, + size + offset); +} + +static void pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, + phys_addr_t paddr, void __iomem *vaddr, + size_t size) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + struct pci_epf *epf = epf_mhi->epf; + struct pci_epc *epc = epf->epc; + size_t offset = pci_addr & (epc->mem->window.page_size - 1); + + __pci_epf_mhi_unmap_free(mhi_cntrl, pci_addr, paddr, vaddr, offset, + size); +} + +static void pci_epf_mhi_raise_irq(struct mhi_ep_cntrl *mhi_cntrl, u32 vector) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + struct pci_epf *epf = epf_mhi->epf; + struct pci_epc *epc = epf->epc; + + /* + * MHI supplies 0 based MSI vectors but the API expects the vector + * number to start from 1, so we need to increment the vector by 1. 
+ */ + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, PCI_EPC_IRQ_MSI, + vector + 1); +} + +static int pci_epf_mhi_read_from_host(struct mhi_ep_cntrl *mhi_cntrl, u64 from, + void *to, size_t size) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + size_t offset = from % SZ_4K; + void __iomem *tre_buf; + phys_addr_t tre_phys; + int ret; + + mutex_lock(&epf_mhi->lock); + + ret = __pci_epf_mhi_alloc_map(mhi_cntrl, from, &tre_phys, &tre_buf, + offset, size); + if (ret) { + mutex_unlock(&epf_mhi->lock); + return ret; + } + + memcpy_fromio(to, tre_buf, size); + + __pci_epf_mhi_unmap_free(mhi_cntrl, from, tre_phys, tre_buf, offset, + size); + + mutex_unlock(&epf_mhi->lock); + + return 0; +} + +static int pci_epf_mhi_write_to_host(struct mhi_ep_cntrl *mhi_cntrl, + void *from, u64 to, size_t size) +{ + struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl); + size_t offset = to % SZ_4K; + void __iomem *tre_buf; + phys_addr_t tre_phys; + int ret; + + mutex_lock(&epf_mhi->lock); + + ret = __pci_epf_mhi_alloc_map(mhi_cntrl, to, &tre_phys, &tre_buf, + offset, size); + if (ret) { + mutex_unlock(&epf_mhi->lock); + return ret; + } + + memcpy_toio(tre_buf, from, size); + + __pci_epf_mhi_unmap_free(mhi_cntrl, to, tre_phys, tre_buf, offset, + size); + + mutex_unlock(&epf_mhi->lock); + + return 0; +} + +static int pci_epf_mhi_core_init(struct pci_epf *epf) +{ + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); + const struct pci_epf_mhi_ep_info *info = epf_mhi->info; + struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num]; + struct pci_epc *epc = epf->epc; + struct device *dev = &epf->dev; + int ret; + + epf_bar->phys_addr = epf_mhi->mmio_phys; + epf_bar->size = epf_mhi->mmio_size; + epf_bar->barno = info->bar_num; + epf_bar->flags = info->epf_flags; + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, epf_bar); + if (ret) { + dev_err(dev, "Failed to set BAR: %d\n", ret); + return ret; + } + + ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no, + order_base_2(info->msi_count)); + if (ret) { + dev_err(dev, "Failed to set MSI configuration: %d\n", ret); + return ret; + } + + ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no, + epf->header); + if (ret) { + dev_err(dev, "Failed to set Configuration header: %d\n", ret); + return ret; + } + + return 0; +} + +static int pci_epf_mhi_link_up(struct pci_epf *epf) +{ + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); + const struct pci_epf_mhi_ep_info *info = epf_mhi->info; + struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl; + struct pci_epc *epc = epf->epc; + struct device *dev = &epf->dev; + int ret; + + mhi_cntrl->mmio = epf_mhi->mmio; + mhi_cntrl->irq = epf_mhi->irq; + mhi_cntrl->mru = info->mru; + + /* Assign the struct dev of PCI EP as MHI controller device */ + mhi_cntrl->cntrl_dev = epc->dev.parent; + mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq; + mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map; + mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free; + mhi_cntrl->read_from_host = pci_epf_mhi_read_from_host; + mhi_cntrl->write_to_host = pci_epf_mhi_write_to_host; + + /* Register the MHI EP controller */ + ret = mhi_ep_register_controller(mhi_cntrl, info->config); + if (ret) { + dev_err(dev, "Failed to register MHI EP controller: %d\n", ret); + return ret; + } + + return 0; +} + +static int pci_epf_mhi_link_down(struct pci_epf *epf) +{ + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); + struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl; + + if (mhi_cntrl->mhi_dev) { + mhi_ep_power_down(mhi_cntrl); + 
mhi_ep_unregister_controller(mhi_cntrl); + } + + return 0; +} + +static int pci_epf_mhi_bme(struct pci_epf *epf) +{ + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); + struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl; + struct device *dev = &epf->dev; + int ret; + + /* + * Power up the MHI EP stack if link is up and stack is in power down + * state. + */ + if (!mhi_cntrl->enabled && mhi_cntrl->mhi_dev) { + ret = mhi_ep_power_up(mhi_cntrl); + if (ret) { + dev_err(dev, "Failed to power up MHI EP: %d\n", ret); + mhi_ep_unregister_controller(mhi_cntrl); + } + } + + return 0; +} + +static int pci_epf_mhi_bind(struct pci_epf *epf) +{ + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); + struct pci_epc *epc = epf->epc; + struct platform_device *pdev = to_platform_device(epc->dev.parent); + struct resource *res; + int ret; + + /* Get MMIO base address from Endpoint controller */ + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio"); + epf_mhi->mmio_phys = res->start; + epf_mhi->mmio_size = resource_size(res); + + epf_mhi->mmio = ioremap(epf_mhi->mmio_phys, epf_mhi->mmio_size); + if (!epf_mhi->mmio) + return -ENOMEM; + + ret = platform_get_irq_byname(pdev, "doorbell"); + if (ret < 0) { + iounmap(epf_mhi->mmio); + return ret; + } + + epf_mhi->irq = ret; + + return 0; +} + +static void pci_epf_mhi_unbind(struct pci_epf *epf) +{ + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); + const struct pci_epf_mhi_ep_info *info = epf_mhi->info; + struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num]; + struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl; + struct pci_epc *epc = epf->epc; + + /* + * Forcefully power down the MHI EP stack. Only way to bring the MHI EP + * stack back to working state after successive bind is by getting BME + * from host. 
+ */ + if (mhi_cntrl->mhi_dev) { + mhi_ep_power_down(mhi_cntrl); + mhi_ep_unregister_controller(mhi_cntrl); + } + + iounmap(epf_mhi->mmio); + pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, epf_bar); +} + +static struct pci_epc_event_ops pci_epf_mhi_event_ops = { + .core_init = pci_epf_mhi_core_init, + .link_up = pci_epf_mhi_link_up, + .link_down = pci_epf_mhi_link_down, + .bme = pci_epf_mhi_bme, +}; + +static int pci_epf_mhi_probe(struct pci_epf *epf, + const struct pci_epf_device_id *id) +{ + struct pci_epf_mhi_ep_info *info = + (struct pci_epf_mhi_ep_info *)id->driver_data; + struct pci_epf_mhi *epf_mhi; + struct device *dev = &epf->dev; + + epf_mhi = devm_kzalloc(dev, sizeof(*epf_mhi), GFP_KERNEL); + if (!epf_mhi) + return -ENOMEM; + + epf->header = info->epf_header; + epf_mhi->info = info; + epf_mhi->epf = epf; + + epf->event_ops = &pci_epf_mhi_event_ops; + + mutex_init(&epf_mhi->lock); + + epf_set_drvdata(epf, epf_mhi); + + return 0; +} + +static const struct pci_epf_device_id pci_epf_mhi_ids[] = { + { + .name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info, + }, + {}, +}; + +static struct pci_epf_ops pci_epf_mhi_ops = { + .unbind = pci_epf_mhi_unbind, + .bind = pci_epf_mhi_bind, +}; + +static struct pci_epf_driver pci_epf_mhi_driver = { + .driver.name = "pci_epf_mhi", + .probe = pci_epf_mhi_probe, + .id_table = pci_epf_mhi_ids, + .ops = &pci_epf_mhi_ops, + .owner = THIS_MODULE, +}; + +static int __init pci_epf_mhi_init(void) +{ + return pci_epf_register_driver(&pci_epf_mhi_driver); +} +module_init(pci_epf_mhi_init); + +static void __exit pci_epf_mhi_exit(void) +{ + pci_epf_unregister_driver(&pci_epf_mhi_driver); +} +module_exit(pci_epf_mhi_exit); + +MODULE_DESCRIPTION("PCI EPF driver for MHI Endpoint devices"); +MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>"); +MODULE_LICENSE("GPL"); diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/endpoint/functions/pci-epf-ntb.c index 9a00448c7e61..9aac2c6f3bb9 100644 --- a/drivers/pci/endpoint/functions/pci-epf-ntb.c +++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c @@ -2075,11 +2075,13 @@ static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf, /** * epf_ntb_probe() - Probe NTB function driver * @epf: NTB endpoint function device + * @id: NTB endpoint function device ID * * Probe NTB function driver when endpoint function bus detects a NTB * endpoint function. 
*/ -static int epf_ntb_probe(struct pci_epf *epf) +static int epf_ntb_probe(struct pci_epf *epf, + const struct pci_epf_device_id *id) { struct epf_ntb *ntb; struct device *dev; diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c index 0f9d2ec822ac..1f0d2b84296a 100644 --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -54,6 +54,9 @@ struct pci_epf_test { struct delayed_work cmd_handler; struct dma_chan *dma_chan_tx; struct dma_chan *dma_chan_rx; + struct dma_chan *transfer_chan; + dma_cookie_t transfer_cookie; + enum dma_status transfer_status; struct completion transfer_complete; bool dma_supported; bool dma_private; @@ -85,8 +88,14 @@ static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; static void pci_epf_test_dma_callback(void *param) { struct pci_epf_test *epf_test = param; - - complete(&epf_test->transfer_complete); + struct dma_tx_state state; + + epf_test->transfer_status = + dmaengine_tx_status(epf_test->transfer_chan, + epf_test->transfer_cookie, &state); + if (epf_test->transfer_status == DMA_COMPLETE || + epf_test->transfer_status == DMA_ERROR) + complete(&epf_test->transfer_complete); } /** @@ -112,7 +121,7 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test, size_t len, dma_addr_t dma_remote, enum dma_transfer_direction dir) { - struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ? + struct dma_chan *chan = (dir == DMA_MEM_TO_DEV) ? epf_test->dma_chan_tx : epf_test->dma_chan_rx; dma_addr_t dma_local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst; enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; @@ -120,7 +129,6 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test, struct dma_async_tx_descriptor *tx; struct dma_slave_config sconf = {}; struct device *dev = &epf->dev; - dma_cookie_t cookie; int ret; if (IS_ERR_OR_NULL(chan)) { @@ -151,26 +159,34 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test, return -EIO; } + reinit_completion(&epf_test->transfer_complete); + epf_test->transfer_chan = chan; tx->callback = pci_epf_test_dma_callback; tx->callback_param = epf_test; - cookie = tx->tx_submit(tx); - reinit_completion(&epf_test->transfer_complete); + epf_test->transfer_cookie = dmaengine_submit(tx); - ret = dma_submit_error(cookie); + ret = dma_submit_error(epf_test->transfer_cookie); if (ret) { - dev_err(dev, "Failed to do DMA tx_submit %d\n", cookie); - return -EIO; + dev_err(dev, "Failed to do DMA tx_submit %d\n", ret); + goto terminate; } dma_async_issue_pending(chan); ret = wait_for_completion_interruptible(&epf_test->transfer_complete); if (ret < 0) { - dmaengine_terminate_sync(chan); - dev_err(dev, "DMA wait_for_completion_timeout\n"); - return -ETIMEDOUT; + dev_err(dev, "DMA wait_for_completion interrupted\n"); + goto terminate; } - return 0; + if (epf_test->transfer_status == DMA_ERROR) { + dev_err(dev, "DMA transfer failed\n"); + ret = -EIO; + } + +terminate: + dmaengine_terminate_sync(chan); + + return ret; } struct epf_dma_filter { @@ -279,40 +295,29 @@ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test) return; } -static void pci_epf_test_print_rate(const char *ops, u64 size, +static void pci_epf_test_print_rate(struct pci_epf_test *epf_test, + const char *op, u64 size, struct timespec64 *start, struct timespec64 *end, bool dma) { - struct timespec64 ts; - u64 rate, ns; - - ts = timespec64_sub(*end, *start); - - /* convert both size (stored in 'rate') and 
time in terms of 'ns' */ - ns = timespec64_to_ns(&ts); - rate = size * NSEC_PER_SEC; - - /* Divide both size (stored in 'rate') and ns by a common factor */ - while (ns > UINT_MAX) { - rate >>= 1; - ns >>= 1; - } - - if (!ns) - return; + struct timespec64 ts = timespec64_sub(*end, *start); + u64 rate = 0, ns; /* calculate the rate */ - do_div(rate, (uint32_t)ns); + ns = timespec64_to_ns(&ts); + if (ns) + rate = div64_u64(size * NSEC_PER_SEC, ns * 1000); - pr_info("\n%s => Size: %llu bytes\t DMA: %s\t Time: %llu.%09u seconds\t" - "Rate: %llu KB/s\n", ops, size, dma ? "YES" : "NO", - (u64)ts.tv_sec, (u32)ts.tv_nsec, rate / 1024); + dev_info(&epf_test->epf->dev, + "%s => Size: %llu B, DMA: %s, Time: %llu.%09u s, Rate: %llu KB/s\n", + op, size, dma ? "YES" : "NO", + (u64)ts.tv_sec, (u32)ts.tv_nsec, rate); } -static int pci_epf_test_copy(struct pci_epf_test *epf_test) +static void pci_epf_test_copy(struct pci_epf_test *epf_test, + struct pci_epf_test_reg *reg) { int ret; - bool use_dma; void __iomem *src_addr; void __iomem *dst_addr; phys_addr_t src_phys_addr; @@ -321,8 +326,6 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test) struct pci_epf *epf = epf_test->epf; struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; - enum pci_barno test_reg_bar = epf_test->test_reg_bar; - struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size); if (!src_addr) { @@ -357,14 +360,7 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test) } ktime_get_ts64(&start); - use_dma = !!(reg->flags & FLAG_USE_DMA); - if (use_dma) { - if (!epf_test->dma_supported) { - dev_err(dev, "Cannot transfer data using DMA\n"); - ret = -EINVAL; - goto err_map_addr; - } - + if (reg->flags & FLAG_USE_DMA) { if (epf_test->dma_private) { dev_err(dev, "Cannot transfer data using DMA\n"); ret = -EINVAL; @@ -390,7 +386,8 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test) kfree(buf); } ktime_get_ts64(&end); - pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma); + pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start, &end, + reg->flags & FLAG_USE_DMA); err_map_addr: pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr); @@ -405,16 +402,19 @@ err_src_addr: pci_epc_mem_free_addr(epc, src_phys_addr, src_addr, reg->size); err: - return ret; + if (!ret) + reg->status |= STATUS_COPY_SUCCESS; + else + reg->status |= STATUS_COPY_FAIL; } -static int pci_epf_test_read(struct pci_epf_test *epf_test) +static void pci_epf_test_read(struct pci_epf_test *epf_test, + struct pci_epf_test_reg *reg) { int ret; void __iomem *src_addr; void *buf; u32 crc32; - bool use_dma; phys_addr_t phys_addr; phys_addr_t dst_phys_addr; struct timespec64 start, end; @@ -422,8 +422,6 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test) struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; struct device *dma_dev = epf->epc->dev.parent; - enum pci_barno test_reg_bar = epf_test->test_reg_bar; - struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size); if (!src_addr) { @@ -447,14 +445,7 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test) goto err_map_addr; } - use_dma = !!(reg->flags & FLAG_USE_DMA); - if (use_dma) { - if (!epf_test->dma_supported) { - dev_err(dev, "Cannot transfer data using DMA\n"); - ret = -EINVAL; - goto err_dma_map; - } - + if (reg->flags & FLAG_USE_DMA) { dst_phys_addr = dma_map_single(dma_dev, buf, reg->size, 
DMA_FROM_DEVICE); if (dma_mapping_error(dma_dev, dst_phys_addr)) { @@ -479,7 +470,8 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test) ktime_get_ts64(&end); } - pci_epf_test_print_rate("READ", reg->size, &start, &end, use_dma); + pci_epf_test_print_rate(epf_test, "READ", reg->size, &start, &end, + reg->flags & FLAG_USE_DMA); crc32 = crc32_le(~0, buf, reg->size); if (crc32 != reg->checksum) @@ -495,15 +487,18 @@ err_addr: pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size); err: - return ret; + if (!ret) + reg->status |= STATUS_READ_SUCCESS; + else + reg->status |= STATUS_READ_FAIL; } -static int pci_epf_test_write(struct pci_epf_test *epf_test) +static void pci_epf_test_write(struct pci_epf_test *epf_test, + struct pci_epf_test_reg *reg) { int ret; void __iomem *dst_addr; void *buf; - bool use_dma; phys_addr_t phys_addr; phys_addr_t src_phys_addr; struct timespec64 start, end; @@ -511,8 +506,6 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test) struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; struct device *dma_dev = epf->epc->dev.parent; - enum pci_barno test_reg_bar = epf_test->test_reg_bar; - struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size); if (!dst_addr) { @@ -539,14 +532,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test) get_random_bytes(buf, reg->size); reg->checksum = crc32_le(~0, buf, reg->size); - use_dma = !!(reg->flags & FLAG_USE_DMA); - if (use_dma) { - if (!epf_test->dma_supported) { - dev_err(dev, "Cannot transfer data using DMA\n"); - ret = -EINVAL; - goto err_dma_map; - } - + if (reg->flags & FLAG_USE_DMA) { src_phys_addr = dma_map_single(dma_dev, buf, reg->size, DMA_TO_DEVICE); if (dma_mapping_error(dma_dev, src_phys_addr)) { @@ -573,7 +559,8 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test) ktime_get_ts64(&end); } - pci_epf_test_print_rate("WRITE", reg->size, &start, &end, use_dma); + pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start, &end, + reg->flags & FLAG_USE_DMA); /* * wait 1ms inorder for the write to complete. Without this delay L3 @@ -591,32 +578,51 @@ err_addr: pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size); err: - return ret; + if (!ret) + reg->status |= STATUS_WRITE_SUCCESS; + else + reg->status |= STATUS_WRITE_FAIL; } -static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type, - u16 irq) +static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, + struct pci_epf_test_reg *reg) { struct pci_epf *epf = epf_test->epf; struct device *dev = &epf->dev; struct pci_epc *epc = epf->epc; - enum pci_barno test_reg_bar = epf_test->test_reg_bar; - struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; + u32 status = reg->status | STATUS_IRQ_RAISED; + int count; - reg->status |= STATUS_IRQ_RAISED; + /* + * Set the status before raising the IRQ to ensure that the host sees + * the updated value when it gets the IRQ. 
+ */ + WRITE_ONCE(reg->status, status); - switch (irq_type) { + switch (reg->irq_type) { case IRQ_TYPE_LEGACY: pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, PCI_EPC_IRQ_LEGACY, 0); break; case IRQ_TYPE_MSI: + count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no); + if (reg->irq_number > count || count <= 0) { + dev_err(dev, "Invalid MSI IRQ number %d / %d\n", + reg->irq_number, count); + return; + } pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, - PCI_EPC_IRQ_MSI, irq); + PCI_EPC_IRQ_MSI, reg->irq_number); break; case IRQ_TYPE_MSIX: + count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no); + if (reg->irq_number > count || count <= 0) { + dev_err(dev, "Invalid MSIX IRQ number %d / %d\n", + reg->irq_number, count); + return; + } pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, - PCI_EPC_IRQ_MSIX, irq); + PCI_EPC_IRQ_MSIX, reg->irq_number); break; default: dev_err(dev, "Failed to raise IRQ, unknown type\n"); @@ -626,87 +632,53 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type, static void pci_epf_test_cmd_handler(struct work_struct *work) { - int ret; - int count; u32 command; struct pci_epf_test *epf_test = container_of(work, struct pci_epf_test, cmd_handler.work); struct pci_epf *epf = epf_test->epf; struct device *dev = &epf->dev; - struct pci_epc *epc = epf->epc; enum pci_barno test_reg_bar = epf_test->test_reg_bar; struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; - command = reg->command; + command = READ_ONCE(reg->command); if (!command) goto reset_handler; - reg->command = 0; - reg->status = 0; + WRITE_ONCE(reg->command, 0); + WRITE_ONCE(reg->status, 0); - if (reg->irq_type > IRQ_TYPE_MSIX) { - dev_err(dev, "Failed to detect IRQ type\n"); + if ((READ_ONCE(reg->flags) & FLAG_USE_DMA) && + !epf_test->dma_supported) { + dev_err(dev, "Cannot transfer data using DMA\n"); goto reset_handler; } - if (command & COMMAND_RAISE_LEGACY_IRQ) { - reg->status = STATUS_IRQ_RAISED; - pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, - PCI_EPC_IRQ_LEGACY, 0); - goto reset_handler; - } - - if (command & COMMAND_WRITE) { - ret = pci_epf_test_write(epf_test); - if (ret) - reg->status |= STATUS_WRITE_FAIL; - else - reg->status |= STATUS_WRITE_SUCCESS; - pci_epf_test_raise_irq(epf_test, reg->irq_type, - reg->irq_number); - goto reset_handler; - } - - if (command & COMMAND_READ) { - ret = pci_epf_test_read(epf_test); - if (!ret) - reg->status |= STATUS_READ_SUCCESS; - else - reg->status |= STATUS_READ_FAIL; - pci_epf_test_raise_irq(epf_test, reg->irq_type, - reg->irq_number); - goto reset_handler; - } - - if (command & COMMAND_COPY) { - ret = pci_epf_test_copy(epf_test); - if (!ret) - reg->status |= STATUS_COPY_SUCCESS; - else - reg->status |= STATUS_COPY_FAIL; - pci_epf_test_raise_irq(epf_test, reg->irq_type, - reg->irq_number); - goto reset_handler; - } - - if (command & COMMAND_RAISE_MSI_IRQ) { - count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no); - if (reg->irq_number > count || count <= 0) - goto reset_handler; - reg->status = STATUS_IRQ_RAISED; - pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, - PCI_EPC_IRQ_MSI, reg->irq_number); + if (reg->irq_type > IRQ_TYPE_MSIX) { + dev_err(dev, "Failed to detect IRQ type\n"); goto reset_handler; } - if (command & COMMAND_RAISE_MSIX_IRQ) { - count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no); - if (reg->irq_number > count || count <= 0) - goto reset_handler; - reg->status = STATUS_IRQ_RAISED; - pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, - PCI_EPC_IRQ_MSIX, reg->irq_number); 
- goto reset_handler; + switch (command) { + case COMMAND_RAISE_LEGACY_IRQ: + case COMMAND_RAISE_MSI_IRQ: + case COMMAND_RAISE_MSIX_IRQ: + pci_epf_test_raise_irq(epf_test, reg); + break; + case COMMAND_WRITE: + pci_epf_test_write(epf_test, reg); + pci_epf_test_raise_irq(epf_test, reg); + break; + case COMMAND_READ: + pci_epf_test_read(epf_test, reg); + pci_epf_test_raise_irq(epf_test, reg); + break; + case COMMAND_COPY: + pci_epf_test_copy(epf_test, reg); + pci_epf_test_raise_irq(epf_test, reg); + break; + default: + dev_err(dev, "Invalid command 0x%x\n", command); + break; } reset_handler: @@ -980,7 +952,8 @@ static const struct pci_epf_device_id pci_epf_test_ids[] = { {}, }; -static int pci_epf_test_probe(struct pci_epf *epf) +static int pci_epf_test_probe(struct pci_epf *epf, + const struct pci_epf_device_id *id) { struct pci_epf_test *epf_test; struct device *dev = &epf->dev; diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c index b7c7a8af99f4..0f5c8f8be847 100644 --- a/drivers/pci/endpoint/functions/pci-epf-vntb.c +++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c @@ -84,15 +84,15 @@ enum epf_ntb_bar { * | | * | | * | | - * +-----------------------+--------------------------+ Base+span_offset + * +-----------------------+--------------------------+ Base+spad_offset * | | | - * | Peer Span Space | Span Space | + * | Peer Spad Space | Spad Space | * | | | * | | | - * +-----------------------+--------------------------+ Base+span_offset - * | | | +span_count * 4 + * +-----------------------+--------------------------+ Base+spad_offset + * | | | +spad_count * 4 * | | | - * | Span Space | Peer Span Space | + * | Spad Space | Peer Spad Space | * | | | * +-----------------------+--------------------------+ * Virtual PCI PCIe Endpoint @@ -1395,13 +1395,15 @@ static struct pci_epf_ops epf_ntb_ops = { /** * epf_ntb_probe() - Probe NTB function driver * @epf: NTB endpoint function device + * @id: NTB endpoint function device ID * * Probe NTB function driver when endpoint function bus detects a NTB * endpoint function. 
* * Returns: Zero for success, or an error code in case of failure */ -static int epf_ntb_probe(struct pci_epf *epf) +static int epf_ntb_probe(struct pci_epf *epf, + const struct pci_epf_device_id *id) { struct epf_ntb *ntb; struct device *dev; diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c index 4b8ac0ac84d5..0ea64e24ed61 100644 --- a/drivers/pci/endpoint/pci-ep-cfs.c +++ b/drivers/pci/endpoint/pci-ep-cfs.c @@ -23,6 +23,7 @@ struct pci_epf_group { struct config_group group; struct config_group primary_epc_group; struct config_group secondary_epc_group; + struct config_group *type_group; struct delayed_work cfs_work; struct pci_epf *epf; int index; @@ -178,6 +179,9 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page, if (kstrtobool(page, &start) < 0) return -EINVAL; + if (start == epc_group->start) + return -EALREADY; + if (!start) { pci_epc_stop(epc); epc_group->start = 0; @@ -502,33 +506,64 @@ static struct configfs_item_operations pci_epf_ops = { .release = pci_epf_release, }; -static struct config_group *pci_epf_type_make(struct config_group *group, - const char *name) +static const struct config_item_type pci_epf_type = { + .ct_item_ops = &pci_epf_ops, + .ct_attrs = pci_epf_attrs, + .ct_owner = THIS_MODULE, +}; + +/** + * pci_epf_type_add_cfs() - Help function drivers to expose function specific + * attributes in configfs + * @epf: the EPF device that has to be configured using configfs + * @group: the parent configfs group (corresponding to entries in + * pci_epf_device_id) + * + * Invoke to expose function specific attributes in configfs. + * + * Return: A pointer to a config_group structure or NULL if the function driver + * does not have anything to expose (attributes configured by user) or if + * the function driver does not implement the add_cfs() method. + * + * Returns an error pointer if this function is called for an unbound EPF device + * or if the EPF driver add_cfs() method fails. 
+ */ +static struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf, + struct config_group *group) { - struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item); struct config_group *epf_type_group; - epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group); + if (!epf->driver) { + dev_err(&epf->dev, "epf device not bound to driver\n"); + return ERR_PTR(-ENODEV); + } + + if (!epf->driver->ops->add_cfs) + return NULL; + + mutex_lock(&epf->lock); + epf_type_group = epf->driver->ops->add_cfs(epf, group); + mutex_unlock(&epf->lock); + return epf_type_group; } -static void pci_epf_type_drop(struct config_group *group, - struct config_item *item) +static void pci_ep_cfs_add_type_group(struct pci_epf_group *epf_group) { - config_item_put(item); -} + struct config_group *group; -static struct configfs_group_operations pci_epf_type_group_ops = { - .make_group = &pci_epf_type_make, - .drop_item = &pci_epf_type_drop, -}; + group = pci_epf_type_add_cfs(epf_group->epf, &epf_group->group); + if (!group) + return; -static const struct config_item_type pci_epf_type = { - .ct_group_ops = &pci_epf_type_group_ops, - .ct_item_ops = &pci_epf_ops, - .ct_attrs = pci_epf_attrs, - .ct_owner = THIS_MODULE, -}; + if (IS_ERR(group)) { + dev_err(&epf_group->epf->dev, + "failed to create epf type specific attributes\n"); + return; + } + + configfs_register_group(&epf_group->group, group); +} static void pci_epf_cfs_work(struct work_struct *work) { @@ -547,6 +582,8 @@ static void pci_epf_cfs_work(struct work_struct *work) pr_err("failed to create 'secondary' EPC interface\n"); return; } + + pci_ep_cfs_add_type_group(epf_group); } static struct config_group *pci_epf_make(struct config_group *group, diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c index 46c9a5c3ca14..6c54fa5684d2 100644 --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c @@ -213,7 +213,7 @@ EXPORT_SYMBOL_GPL(pci_epc_start); * @func_no: the physical endpoint function number in the EPC device * @vfunc_no: the virtual endpoint function number in the physical function * @type: specify the type of interrupt; legacy, MSI or MSI-X - * @interrupt_num: the MSI or MSI-X interrupt number + * @interrupt_num: the MSI or MSI-X interrupt number with range (1-N) * * Invoke to raise an legacy, MSI or MSI-X interrupt */ @@ -246,7 +246,7 @@ EXPORT_SYMBOL_GPL(pci_epc_raise_irq); * @func_no: the physical endpoint function number in the EPC device * @vfunc_no: the virtual endpoint function number in the physical function * @phys_addr: the physical address of the outbound region - * @interrupt_num: the MSI interrupt number + * @interrupt_num: the MSI interrupt number with range (1-N) * @entry_size: Size of Outbound address region for each interrupt * @msi_data: the data that should be written in order to raise MSI interrupt * with interrupt number as 'interrupt num' @@ -707,6 +707,32 @@ void pci_epc_linkup(struct pci_epc *epc) EXPORT_SYMBOL_GPL(pci_epc_linkup); /** + * pci_epc_linkdown() - Notify the EPF device that EPC device has dropped the + * connection with the Root Complex. + * @epc: the EPC device which has dropped the link with the host + * + * Invoke to Notify the EPF device that the EPC device has dropped the + * connection with the Root Complex. 
+ */ +void pci_epc_linkdown(struct pci_epc *epc) +{ + struct pci_epf *epf; + + if (!epc || IS_ERR(epc)) + return; + + mutex_lock(&epc->list_lock); + list_for_each_entry(epf, &epc->pci_epf, list) { + mutex_lock(&epf->lock); + if (epf->event_ops && epf->event_ops->link_down) + epf->event_ops->link_down(epf); + mutex_unlock(&epf->lock); + } + mutex_unlock(&epc->list_lock); +} +EXPORT_SYMBOL_GPL(pci_epc_linkdown); + +/** * pci_epc_init_notify() - Notify the EPF device that EPC device's core * initialization is completed. * @epc: the EPC device whose core initialization is completed @@ -733,6 +759,32 @@ void pci_epc_init_notify(struct pci_epc *epc) EXPORT_SYMBOL_GPL(pci_epc_init_notify); /** + * pci_epc_bme_notify() - Notify the EPF device that the EPC device has received + * the BME event from the Root complex + * @epc: the EPC device that received the BME event + * + * Invoke to Notify the EPF device that the EPC device has received the Bus + * Master Enable (BME) event from the Root complex + */ +void pci_epc_bme_notify(struct pci_epc *epc) +{ + struct pci_epf *epf; + + if (!epc || IS_ERR(epc)) + return; + + mutex_lock(&epc->list_lock); + list_for_each_entry(epf, &epc->pci_epf, list) { + mutex_lock(&epf->lock); + if (epf->event_ops && epf->event_ops->bme) + epf->event_ops->bme(epf); + mutex_unlock(&epf->lock); + } + mutex_unlock(&epc->list_lock); +} +EXPORT_SYMBOL_GPL(pci_epc_bme_notify); + +/** * pci_epc_destroy() - destroy the EPC device * @epc: the EPC device that has to be destroyed * diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c index 2036e38be093..2c32de667937 100644 --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c @@ -21,38 +21,6 @@ static struct bus_type pci_epf_bus_type; static const struct device_type pci_epf_type; /** - * pci_epf_type_add_cfs() - Help function drivers to expose function specific - * attributes in configfs - * @epf: the EPF device that has to be configured using configfs - * @group: the parent configfs group (corresponding to entries in - * pci_epf_device_id) - * - * Invoke to expose function specific attributes in configfs. If the function - * driver does not have anything to expose (attributes configured by user), - * return NULL. 
- */ -struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf, - struct config_group *group) -{ - struct config_group *epf_type_group; - - if (!epf->driver) { - dev_err(&epf->dev, "epf device not bound to driver\n"); - return NULL; - } - - if (!epf->driver->ops->add_cfs) - return NULL; - - mutex_lock(&epf->lock); - epf_type_group = epf->driver->ops->add_cfs(epf, group); - mutex_unlock(&epf->lock); - - return epf_type_group; -} -EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs); - -/** * pci_epf_unbind() - Notify the function driver that the binding between the * EPF device and EPC device has been lost * @epf: the EPF device which has lost the binding with the EPC device @@ -493,16 +461,16 @@ static const struct device_type pci_epf_type = { .release = pci_epf_dev_release, }; -static int +static const struct pci_epf_device_id * pci_epf_match_id(const struct pci_epf_device_id *id, const struct pci_epf *epf) { while (id->name[0]) { if (strcmp(epf->name, id->name) == 0) - return true; + return id; id++; } - return false; + return NULL; } static int pci_epf_device_match(struct device *dev, struct device_driver *drv) @@ -511,7 +479,7 @@ static int pci_epf_device_match(struct device *dev, struct device_driver *drv) struct pci_epf_driver *driver = to_pci_epf_driver(drv); if (driver->id_table) - return pci_epf_match_id(driver->id_table, epf); + return !!pci_epf_match_id(driver->id_table, epf); return !strcmp(epf->name, drv->name); } @@ -526,7 +494,7 @@ static int pci_epf_device_probe(struct device *dev) epf->driver = driver; - return driver->probe(epf); + return driver->probe(epf, pci_epf_match_id(driver->id_table, epf)); } static void pci_epf_device_remove(struct device *dev) diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c index 5b1f271c6034..328d1e416014 100644 --- a/drivers/pci/hotplug/acpiphp_glue.c +++ b/drivers/pci/hotplug/acpiphp_glue.c @@ -498,7 +498,6 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge) acpiphp_native_scan_bridge(dev); } } else { - LIST_HEAD(add_list); int max, pass; acpiphp_rescan_slot(slot); @@ -512,12 +511,10 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge) if (pass && dev->subordinate) { check_hotplug_bridge(slot, dev); pcibios_resource_survey_bus(dev->subordinate); - __pci_bus_size_bridges(dev->subordinate, - &add_list); } } } - __pci_bus_assign_resources(bus, &add_list, NULL); + pci_assign_unassigned_bridge_resources(bus->self); } acpiphp_sanitize_bus(bus); diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c index 529c34808440..dcdbfcf404dd 100644 --- a/drivers/pci/hotplug/pciehp_ctrl.c +++ b/drivers/pci/hotplug/pciehp_ctrl.c @@ -166,11 +166,11 @@ void pciehp_handle_button_press(struct controller *ctrl) case ON_STATE: if (ctrl->state == ON_STATE) { ctrl->state = BLINKINGOFF_STATE; - ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n", + ctrl_info(ctrl, "Slot(%s): Button press: will power off in 5 sec\n", slot_name(ctrl)); } else { ctrl->state = BLINKINGON_STATE; - ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n", + ctrl_info(ctrl, "Slot(%s): Button press: will power on in 5 sec\n", slot_name(ctrl)); } /* blink power indicator and turn off attention */ @@ -185,22 +185,23 @@ void pciehp_handle_button_press(struct controller *ctrl) * press the attention again before the 5 sec. 
limit * expires to cancel hot-add or hot-remove */ - ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl)); cancel_delayed_work(&ctrl->button_work); if (ctrl->state == BLINKINGOFF_STATE) { ctrl->state = ON_STATE; pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_ATTN_IND_OFF); + ctrl_info(ctrl, "Slot(%s): Button press: canceling request to power off\n", + slot_name(ctrl)); } else { ctrl->state = OFF_STATE; pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_ATTN_IND_OFF); + ctrl_info(ctrl, "Slot(%s): Button press: canceling request to power on\n", + slot_name(ctrl)); } - ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n", - slot_name(ctrl)); break; default: - ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n", + ctrl_err(ctrl, "Slot(%s): Button press: ignoring invalid state %#x\n", slot_name(ctrl), ctrl->state); break; } @@ -256,6 +257,14 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events) present = pciehp_card_present(ctrl); link_active = pciehp_check_link_active(ctrl); if (present <= 0 && link_active <= 0) { + if (ctrl->state == BLINKINGON_STATE) { + ctrl->state = OFF_STATE; + cancel_delayed_work(&ctrl->button_work); + pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, + INDICATOR_NOOP); + ctrl_info(ctrl, "Slot(%s): Card not present\n", + slot_name(ctrl)); + } mutex_unlock(&ctrl->state_lock); return; } diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index f8c70115b691..8711325605f0 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c @@ -722,11 +722,8 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) } /* Check Attention Button Pressed */ - if (events & PCI_EXP_SLTSTA_ABP) { - ctrl_info(ctrl, "Slot(%s): Attention button pressed\n", - slot_name(ctrl)); + if (events & PCI_EXP_SLTSTA_ABP) pciehp_handle_button_press(ctrl); - } /* Check Power Fault Detected */ if (events & PCI_EXP_SLTSTA_PFD) { @@ -984,7 +981,7 @@ static inline int pcie_hotplug_depth(struct pci_dev *dev) struct controller *pcie_init(struct pcie_device *dev) { struct controller *ctrl; - u32 slot_cap, slot_cap2, link_cap; + u32 slot_cap, slot_cap2; u8 poweron; struct pci_dev *pdev = dev->port; struct pci_bus *subordinate = pdev->subordinate; @@ -1030,9 +1027,6 @@ struct controller *pcie_init(struct pcie_device *dev) if (dmi_first_match(inband_presence_disabled_dmi_table)) ctrl->inband_presence_disabled = 1; - /* Check if Data Link Layer Link Active Reporting is implemented */ - pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap); - /* Clear all remaining event bits in Slot Status register. */ pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | @@ -1051,7 +1045,7 @@ struct controller *pcie_init(struct pcie_device *dev) FLAG(slot_cap, PCI_EXP_SLTCAP_EIP), FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS), FLAG(slot_cap2, PCI_EXP_SLTCAP2_IBPD), - FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC), + FLAG(pdev->link_active_reporting, true), pdev->broken_cmd_compl ? 
" (with Cmd Compl erratum)" : ""); /* diff --git a/drivers/pci/of.c b/drivers/pci/of.c index 2c25f4fa0225..e51219f9f523 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c @@ -39,16 +39,14 @@ int pci_set_of_node(struct pci_dev *dev) return -ENODEV; } - dev->dev.of_node = node; - dev->dev.fwnode = &node->fwnode; + device_set_node(&dev->dev, of_fwnode_handle(node)); return 0; } void pci_release_of_node(struct pci_dev *dev) { of_node_put(dev->dev.of_node); - dev->dev.of_node = NULL; - dev->dev.fwnode = NULL; + device_set_node(&dev->dev, NULL); } void pci_set_bus_of_node(struct pci_bus *bus) @@ -63,17 +61,13 @@ void pci_set_bus_of_node(struct pci_bus *bus) bus->self->external_facing = true; } - bus->dev.of_node = node; - - if (bus->dev.of_node) - bus->dev.fwnode = &bus->dev.of_node->fwnode; + device_set_node(&bus->dev, of_fwnode_handle(node)); } void pci_release_bus_of_node(struct pci_bus *bus) { of_node_put(bus->dev.of_node); - bus->dev.of_node = NULL; - bus->dev.fwnode = NULL; + device_set_node(&bus->dev, NULL); } struct device_node * __weak pcibios_get_phb_of_node(struct pci_bus *bus) diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c index 052a611081ec..a05350a4e49c 100644 --- a/drivers/pci/pci-acpi.c +++ b/drivers/pci/pci-acpi.c @@ -1043,6 +1043,16 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev) return false; } +static void acpi_pci_config_space_access(struct pci_dev *dev, bool enable) +{ + int val = enable ? ACPI_REG_CONNECT : ACPI_REG_DISCONNECT; + int ret = acpi_evaluate_reg(ACPI_HANDLE(&dev->dev), + ACPI_ADR_SPACE_PCI_CONFIG, val); + if (ret) + pci_dbg(dev, "ACPI _REG %s evaluation failed (%d)\n", + enable ? "connect" : "disconnect", ret); +} + int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state) { struct acpi_device *adev = ACPI_COMPANION(&dev->dev); @@ -1053,32 +1063,49 @@ int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state) [PCI_D3hot] = ACPI_STATE_D3_HOT, [PCI_D3cold] = ACPI_STATE_D3_COLD, }; - int error = -EINVAL; + int error; /* If the ACPI device has _EJ0, ignore the device */ if (!adev || acpi_has_method(adev->handle, "_EJ0")) return -ENODEV; switch (state) { - case PCI_D3cold: - if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) == - PM_QOS_FLAGS_ALL) { - error = -EBUSY; - break; - } - fallthrough; case PCI_D0: case PCI_D1: case PCI_D2: case PCI_D3hot: - error = acpi_device_set_power(adev, state_conv[state]); + case PCI_D3cold: + break; + default: + return -EINVAL; + } + + if (state == PCI_D3cold) { + if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) == + PM_QOS_FLAGS_ALL) + return -EBUSY; + + /* Notify AML lack of PCI config space availability */ + acpi_pci_config_space_access(dev, false); } - if (!error) - pci_dbg(dev, "power state changed by ACPI to %s\n", - acpi_power_state_string(adev->power.state)); + error = acpi_device_set_power(adev, state_conv[state]); + if (error) + return error; - return error; + pci_dbg(dev, "power state changed by ACPI to %s\n", + acpi_power_state_string(adev->power.state)); + + /* + * Notify AML of PCI config space availability. Config space is + * accessible in all states except D3cold; the only transitions + * that change availability are transitions to D3cold and from + * D3cold to D0. 
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 5ede93222bc1..60230da957e0 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -65,6 +65,13 @@ struct pci_pme_device {
 #define PME_TIMEOUT 1000	/* How long between PME checks */
 
 /*
+ * Following exit from Conventional Reset, devices must be ready within 1 sec
+ * (PCIe r6.0 sec 6.6.1).  A D3cold to D0 transition implies a Conventional
+ * Reset (PCIe r6.0 sec 5.8).
+ */
+#define PCI_RESET_WAIT 1000	/* msec */
+
+/*
  * Devices may extend the 1 sec period through Request Retry Status
  * completions (PCIe r6.0 sec 2.3.1).  The spec does not provide an upper
  * limit, but 60 sec ought to be enough for any device to become
@@ -1156,7 +1163,14 @@ void pci_resume_bus(struct pci_bus *bus)
 static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
 {
 	int delay = 1;
-	u32 id;
+	bool retrain = false;
+	struct pci_dev *bridge;
+
+	if (pci_is_pcie(dev)) {
+		bridge = pci_upstream_bridge(dev);
+		if (bridge)
+			retrain = true;
+	}
 
 	/*
 	 * After reset, the device should not silently discard config
@@ -1170,21 +1184,33 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
 	 * Command register instead of Vendor ID so we don't have to
 	 * contend with the CRS SV value.
 	 */
-	pci_read_config_dword(dev, PCI_COMMAND, &id);
-	while (PCI_POSSIBLE_ERROR(id)) {
+	for (;;) {
+		u32 id;
+
+		pci_read_config_dword(dev, PCI_COMMAND, &id);
+		if (!PCI_POSSIBLE_ERROR(id))
+			break;
+
 		if (delay > timeout) {
 			pci_warn(dev, "not ready %dms after %s; giving up\n",
 				 delay - 1, reset_type);
 			return -ENOTTY;
 		}
 
-		if (delay > PCI_RESET_WAIT)
+		if (delay > PCI_RESET_WAIT) {
+			if (retrain) {
+				retrain = false;
+				if (pcie_failed_link_retrain(bridge)) {
+					delay = 1;
+					continue;
+				}
+			}
 			pci_info(dev, "not ready %dms after %s; waiting\n",
 				 delay - 1, reset_type);
+		}
 
 		msleep(delay);
 		delay *= 2;
-		pci_read_config_dword(dev, PCI_COMMAND, &id);
 	}
 
 	if (delay > PCI_RESET_WAIT)
@@ -2949,13 +2975,13 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
 	{
 		/*
 		 * Downstream device is not accessible after putting a root port
-		 * into D3cold and back into D0 on Elo i2.
+		 * into D3cold and back into D0 on Elo Continental Z2 board
		 */
-		.ident = "Elo i2",
+		.ident = "Elo Continental Z2",
		.matches = {
-			DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"),
-			DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"),
-			DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"),
+			DMI_MATCH(DMI_BOARD_VENDOR, "Elo Touch Solutions"),
+			DMI_MATCH(DMI_BOARD_NAME, "Geminilake"),
+			DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"),
		},
	},
 #endif
@@ -4857,6 +4883,79 @@ static int pci_pm_reset(struct pci_dev *dev, bool probe)
 }
 
 /**
+ * pcie_wait_for_link_status - Wait for link status change
+ * @pdev: Device whose link to wait for.
+ * @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE.
+ * @active: Waiting for active or inactive?
+ *
+ * Return 0 if successful, or -ETIMEDOUT if status has not changed within
+ * PCIE_LINK_RETRAIN_TIMEOUT_MS milliseconds.
+ */
+static int pcie_wait_for_link_status(struct pci_dev *pdev,
+				     bool use_lt, bool active)
+{
+	u16 lnksta_mask, lnksta_match;
+	unsigned long end_jiffies;
+	u16 lnksta;
+
+	lnksta_mask = use_lt ? PCI_EXP_LNKSTA_LT : PCI_EXP_LNKSTA_DLLLA;
+	lnksta_match = active ? lnksta_mask : 0;
+
+	end_jiffies = jiffies + msecs_to_jiffies(PCIE_LINK_RETRAIN_TIMEOUT_MS);
+	do {
+		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
+		if ((lnksta & lnksta_mask) == lnksta_match)
+			return 0;
+		msleep(1);
+	} while (time_before(jiffies, end_jiffies));
+
+	return -ETIMEDOUT;
+}
+
+/**
+ * pcie_retrain_link - Request a link retrain and wait for it to complete
+ * @pdev: Device whose link to retrain.
+ * @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE, for status.
+ *
+ * Retrain completion status is retrieved from the Link Status Register
+ * according to @use_lt.  It is not verified whether the use of the DLLLA
+ * bit is valid.
+ *
+ * Return 0 if successful, or -ETIMEDOUT if training has not completed
+ * within PCIE_LINK_RETRAIN_TIMEOUT_MS milliseconds.
+ */
+int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
+{
+	int rc;
+	u16 lnkctl;
+
+	/*
+	 * Ensure the updated LNKCTL parameters are used during link
+	 * training by checking that there is no ongoing link training to
+	 * avoid LTSSM race as recommended in Implementation Note at the
+	 * end of PCIe r6.0.1 sec 7.5.3.7.
+	 */
+	rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt);
+	if (rc)
+		return rc;
+
+	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl);
+	lnkctl |= PCI_EXP_LNKCTL_RL;
+	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
+	if (pdev->clear_retrain_link) {
+		/*
+		 * Due to an erratum in some devices the Retrain Link bit
+		 * needs to be cleared again manually to allow the link
+		 * training to succeed.
+		 */
+		lnkctl &= ~PCI_EXP_LNKCTL_RL;
+		pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
+	}
+
+	return pcie_wait_for_link_status(pdev, use_lt, !use_lt);
+}
+
+/**
  * pcie_wait_for_link_delay - Wait until link is active or inactive
  * @pdev: Bridge device
  * @active: waiting for active or inactive?
@@ -4867,16 +4966,14 @@ static int pci_pm_reset(struct pci_dev *dev, bool probe)
 static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
				     int delay)
 {
-	int timeout = 1000;
-	bool ret;
-	u16 lnk_status;
+	int rc;
 
	/*
	 * Some controllers might not implement link active reporting. In this
	 * case, we wait for 1000 ms + any delay requested by the caller.
	 */
	if (!pdev->link_active_reporting) {
-		msleep(timeout + delay);
+		msleep(PCIE_LINK_RETRAIN_TIMEOUT_MS + delay);
		return true;
	}
 
@@ -4891,20 +4988,21 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
	 */
	if (active)
		msleep(20);
-	for (;;) {
-		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
-		ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
-		if (ret == active)
-			break;
-		if (timeout <= 0)
-			break;
-		msleep(10);
-		timeout -= 10;
-	}
-	if (active && ret)
+	rc = pcie_wait_for_link_status(pdev, false, active);
+	if (active) {
+		if (rc)
+			rc = pcie_failed_link_retrain(pdev);
+		if (rc)
+			return false;
+
		msleep(delay);
+		return true;
+	}
+
+	if (rc)
+		return false;
 
-	return ret == active;
+	return true;
 }
 
 /**
@@ -5011,11 +5109,9 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
	 *
	 * However, 100 ms is the minimum and the PCIe spec says the
	 * software must allow at least 1s before it can determine that the
-	 * device that did not respond is a broken device. There is
-	 * evidence that 100 ms is not always enough, for example certain
-	 * Titan Ridge xHCI controller does not always respond to
-	 * configuration requests if we only wait for 100 ms (see
-	 * https://bugzilla.kernel.org/show_bug.cgi?id=203885).
+	 * device that did not respond is a broken device. Also device can
+	 * take longer than that to respond if it indicates so through Request
+	 * Retry Status completions.
	 *
	 * Therefore we wait for 100 ms and check for the device presence
	 * until the timeout expires.
@@ -5024,16 +5120,36 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
		return 0;
 
	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+		u16 status;
+
		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
		msleep(delay);
-	} else {
-		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
-			delay);
-		if (!pcie_wait_for_link_delay(dev, true, delay)) {
-			/* Did not train, no need to wait any further */
-			pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
+
+		if (!pci_dev_wait(child, reset_type, PCI_RESET_WAIT - delay))
+			return 0;
+
+		/*
+		 * If the port supports active link reporting we now check
+		 * whether the link is active and if not bail out early with
+		 * the assumption that the device is not present anymore.
+		 */
+		if (!dev->link_active_reporting)
			return -ENOTTY;
-		}
+
+		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &status);
+		if (!(status & PCI_EXP_LNKSTA_DLLLA))
+			return -ENOTTY;
+
+		return pci_dev_wait(child, reset_type,
+				    PCIE_RESET_READY_POLL_MS - PCI_RESET_WAIT);
+	}
+
+	pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
+		delay);
+	if (!pcie_wait_for_link_delay(dev, true, delay)) {
+		/* Did not train, no need to wait any further */
+		pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
+		return -ENOTTY;
	}
 
	return pci_dev_wait(child, reset_type,
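As a worked example of the wait budget above, take a Gen1/Gen2 Downstream Port with the typical 100 ms d3cold_delay: msleep(delay) burns the mandatory 100 ms, and the first pci_dev_wait() call then polls for at most PCI_RESET_WAIT - delay = 1000 - 100 = 900 ms, so a well-behaved device still gets the full 1-second Conventional Reset budget. Only when the device has not answered but Data Link Layer Link Active shows the link is up does the second call extend the wait by PCIE_RESET_READY_POLL_MS - PCI_RESET_WAIT = 60000 - 1000 = 59000 ms for devices stretching the period with Request Retry Status completions (this assumes PCIE_RESET_READY_POLL_MS is the 60-second value referred to by the comment at the top of pci.c; its definition is outside the hunks shown here).
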
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 2475098f6518..a4c397434057 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -11,6 +11,8 @@
 
 #define PCI_VSEC_ID_INTEL_TBT	0x1234	/* Thunderbolt */
 
+#define PCIE_LINK_RETRAIN_TIMEOUT_MS	1000
+
 extern const unsigned char pcie_link_speed[];
 extern bool pci_early_dump;
 
@@ -64,13 +66,6 @@ struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
 #define PCI_PM_D3HOT_WAIT	10	/* msec */
 #define PCI_PM_D3COLD_WAIT	100	/* msec */
 
-/*
- * Following exit from Conventional Reset, devices must be ready within 1 sec
- * (PCIe r6.0 sec 6.6.1).  A D3cold to D0 transition implies a Conventional
- * Reset (PCIe r6.0 sec 5.8).
- */
-#define PCI_RESET_WAIT 1000	/* msec */
-
 void pci_update_current_state(struct pci_dev *dev, pci_power_t state);
 void pci_refresh_power_state(struct pci_dev *dev);
 int pci_power_up(struct pci_dev *dev);
@@ -541,6 +536,7 @@ void pci_acs_init(struct pci_dev *dev);
 int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
 int pci_dev_specific_enable_acs(struct pci_dev *dev);
 int pci_dev_specific_disable_acs_redir(struct pci_dev *dev);
+bool pcie_failed_link_retrain(struct pci_dev *dev);
 #else
 static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev,
					       u16 acs_flags)
@@ -555,6 +551,10 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
 {
	return -ENOTTY;
 }
+static inline bool pcie_failed_link_retrain(struct pci_dev *dev)
+{
+	return false;
+}
 #endif
 
 /* PCI error reporting and recovery */
@@ -563,6 +563,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
		pci_ers_result_t (*reset_subordinates)(struct pci_dev *pdev));
 
 bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
+int pcie_retrain_link(struct pci_dev *pdev, bool use_lt);
 #ifdef CONFIG_PCIEASPM
 void pcie_aspm_init_link_state(struct pci_dev *pdev);
 void pcie_aspm_exit_link_state(struct pci_dev *pdev);
@@ -686,6 +687,8 @@ extern const struct attribute_group aer_stats_attr_group;
 void pci_aer_clear_fatal_status(struct pci_dev *dev);
 int pci_aer_clear_status(struct pci_dev *dev);
 int pci_aer_raw_clear_status(struct pci_dev *dev);
+void pci_save_aer_state(struct pci_dev *dev);
+void pci_restore_aer_state(struct pci_dev *dev);
 #else
 static inline void pci_no_aer(void) { }
 static inline void pci_aer_init(struct pci_dev *d) { }
@@ -693,6 +696,8 @@ static inline void pci_aer_exit(struct pci_dev *d) { }
 static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
 static inline int pci_aer_clear_status(struct pci_dev *dev) { return -EINVAL; }
 static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL; }
+static inline void pci_save_aer_state(struct pci_dev *dev) { }
+static inline void pci_restore_aer_state(struct pci_dev *dev) { }
 #endif
 
 #ifdef CONFIG_ACPI
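With the declaration above, pcie_retrain_link() becomes callable from anywhere inside drivers/pci/. A minimal, hypothetical caller might look like the sketch below (demo_force_retrain() is illustrative only); the use_lt choice mirrors the two real callers in this series: ASPM passes true and polls the Link Training bit, while the ASM2824 quirk passes false and polls Data Link Layer Link Active.

	#include <linux/pci.h>
	#include "pci.h"	/* internal drivers/pci interfaces */

	/* Illustrative sketch: force a retrain on a Downstream Port. */
	static int demo_force_retrain(struct pci_dev *bridge, bool poll_lt)
	{
		int ret;

		/* Polling DLLLA is only meaningful with Link Active Reporting. */
		if (!poll_lt && !bridge->link_active_reporting)
			return -EINVAL;

		ret = pcie_retrain_link(bridge, poll_lt);
		if (ret)
			pci_err(bridge, "link retrain timed out (%d)\n", ret);

		return ret;
	}
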
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
index 66d7514ca111..3dafba0b5f41 100644
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -90,8 +90,6 @@ static const char *policy_str[] = {
	[POLICY_POWER_SUPERSAVE] = "powersupersave"
 };
 
-#define LINK_RETRAIN_TIMEOUT HZ
-
 /*
  * The L1 PM substate capability is only implemented in function 0 in a
  * multi function device.
@@ -193,36 +191,6 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
	link->clkpm_disable = blacklist ? 1 : 0;
 }
 
-static bool pcie_retrain_link(struct pcie_link_state *link)
-{
-	struct pci_dev *parent = link->pdev;
-	unsigned long end_jiffies;
-	u16 reg16;
-
-	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
-	reg16 |= PCI_EXP_LNKCTL_RL;
-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
-	if (parent->clear_retrain_link) {
-		/*
-		 * Due to an erratum in some devices the Retrain Link bit
-		 * needs to be cleared again manually to allow the link
-		 * training to succeed.
-		 */
-		reg16 &= ~PCI_EXP_LNKCTL_RL;
-		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
-	}
-
-	/* Wait for link training end. Break out after waiting for timeout */
-	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
-	do {
-		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
-		if (!(reg16 & PCI_EXP_LNKSTA_LT))
-			break;
-		msleep(1);
-	} while (time_before(jiffies, end_jiffies));
-	return !(reg16 & PCI_EXP_LNKSTA_LT);
-}
-
 /*
  * pcie_aspm_configure_common_clock: check if the 2 ends of a link
  * could use common clock. If they are, configure them to use the
@@ -289,15 +257,15 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
		reg16 &= ~PCI_EXP_LNKCTL_CCC;
	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
 
-	if (pcie_retrain_link(link))
-		return;
+	if (pcie_retrain_link(link->pdev, true)) {
 
-	/* Training failed. Restore common clock configurations */
-	pci_err(parent, "ASPM: Could not configure common clock\n");
-	list_for_each_entry(child, &linkbus->devices, bus_list)
-		pcie_capability_write_word(child, PCI_EXP_LNKCTL,
+		/* Training failed. Restore common clock configurations */
+		pci_err(parent, "ASPM: Could not configure common clock\n");
+		list_for_each_entry(child, &linkbus->devices, bus_list)
+			pcie_capability_write_word(child, PCI_EXP_LNKCTL,
					   child_reg[PCI_FUNC(child->devfn)]);
-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
+		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
+	}
 }
 
 /* Convert L0s latency encoding to ns */
@@ -337,7 +305,7 @@ static u32 calc_l1_acceptable(u32 encoding)
 }
 
 /* Convert L1SS T_pwr encoding to usec */
-static u32 calc_l1ss_pwron(struct pci_dev *pdev, u32 scale, u32 val)
+static u32 calc_l12_pwron(struct pci_dev *pdev, u32 scale, u32 val)
 {
	switch (scale) {
	case 0:
@@ -471,7 +439,7 @@ static void pci_clear_and_set_dword(struct pci_dev *pdev, int pos,
 }
 
 /* Calculate L1.2 PM substate timing parameters */
-static void aspm_calc_l1ss_info(struct pcie_link_state *link,
+static void aspm_calc_l12_info(struct pcie_link_state *link,
				u32 parent_l1ss_cap, u32 child_l1ss_cap)
 {
	struct pci_dev *child = link->downstream, *parent = link->pdev;
@@ -481,9 +449,6 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
	u32 pctl1, pctl2, cctl1, cctl2;
	u32 pl1_2_enables, cl1_2_enables;
 
-	if (!(link->aspm_support & ASPM_STATE_L1_2_MASK))
-		return;
-
	/* Choose the greater of the two Port Common_Mode_Restore_Times */
	val1 = (parent_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
	val2 = (child_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
@@ -495,13 +460,13 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
	val2   = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
	scale2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;
 
-	if (calc_l1ss_pwron(parent, scale1, val1) >
-	    calc_l1ss_pwron(child, scale2, val2)) {
+	if (calc_l12_pwron(parent, scale1, val1) >
+	    calc_l12_pwron(child, scale2, val2)) {
		ctl2 |= scale1 | (val1 << 3);
-		t_power_on = calc_l1ss_pwron(parent, scale1, val1);
+		t_power_on = calc_l12_pwron(parent, scale1, val1);
	} else {
		ctl2 |= scale2 | (val2 << 3);
-		t_power_on = calc_l1ss_pwron(child, scale2, val2);
+		t_power_on = calc_l12_pwron(child, scale2, val2);
	}
 
	/*
@@ -616,8 +581,8 @@ static void aspm_l1ss_init(struct pcie_link_state *link)
	if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2)
		link->aspm_enabled |= ASPM_STATE_L1_2_PCIPM;
 
-	if (link->aspm_support & ASPM_STATE_L1SS)
-		aspm_calc_l1ss_info(link, parent_l1ss_cap, child_l1ss_cap);
+	if (link->aspm_support & ASPM_STATE_L1_2_MASK)
+		aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap);
 }
 
 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
@@ -1010,21 +975,24 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
	down_read(&pci_bus_sem);
	mutex_lock(&aspm_lock);
-	/*
-	 * All PCIe functions are in one slot, remove one function will remove
-	 * the whole slot, so just wait until we are the last function left.
-	 */
-	if (!list_empty(&parent->subordinate->devices))
-		goto out;
 
	link = parent->link_state;
	root = link->root;
	parent_link = link->parent;
 
-	/* All functions are removed, so just disable ASPM for the link */
+	/*
+	 * link->downstream is a pointer to the pci_dev of function 0.  If
+	 * we remove that function, the pci_dev is about to be deallocated,
+	 * so we can't use link->downstream again.  Free the link state to
+	 * avoid this.
+	 *
+	 * If we're removing a non-0 function, it's possible we could
+	 * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
+	 * programming the same ASPM Control value for all functions of
+	 * multi-function devices, so disable ASPM for all of them.
+	 */
	pcie_config_aspm_link(link, 0);
	list_del(&link->sibling);
-	/* Clock PM is for endpoint device */
	free_link_state(link);
 
	/* Recheck latencies and configure upstream links */
@@ -1032,7 +1000,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
		pcie_update_aspm_capable(root);
		pcie_config_aspm_path(parent_link);
	}
-out:
+
	mutex_unlock(&aspm_lock);
	up_read(&pci_bus_sem);
 }
@@ -1095,8 +1063,7 @@ static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
	if (state & PCIE_LINK_STATE_L0S)
		link->aspm_disable |= ASPM_STATE_L0S;
	if (state & PCIE_LINK_STATE_L1)
-		/* L1 PM substates require L1 */
-		link->aspm_disable |= ASPM_STATE_L1 | ASPM_STATE_L1SS;
+		link->aspm_disable |= ASPM_STATE_L1;
	if (state & PCIE_LINK_STATE_L1_1)
		link->aspm_disable |= ASPM_STATE_L1_1;
	if (state & PCIE_LINK_STATE_L1_2)
@@ -1171,16 +1138,16 @@ int pci_enable_link_state(struct pci_dev *pdev, int state)
	if (state & PCIE_LINK_STATE_L0S)
		link->aspm_default |= ASPM_STATE_L0S;
	if (state & PCIE_LINK_STATE_L1)
-		/* L1 PM substates require L1 */
-		link->aspm_default |= ASPM_STATE_L1 | ASPM_STATE_L1SS;
+		link->aspm_default |= ASPM_STATE_L1;
+	/* L1 PM substates require L1 */
	if (state & PCIE_LINK_STATE_L1_1)
-		link->aspm_default |= ASPM_STATE_L1_1;
+		link->aspm_default |= ASPM_STATE_L1_1 | ASPM_STATE_L1;
	if (state & PCIE_LINK_STATE_L1_2)
-		link->aspm_default |= ASPM_STATE_L1_2;
+		link->aspm_default |= ASPM_STATE_L1_2 | ASPM_STATE_L1;
	if (state & PCIE_LINK_STATE_L1_1_PCIPM)
-		link->aspm_default |= ASPM_STATE_L1_1_PCIPM;
+		link->aspm_default |= ASPM_STATE_L1_1_PCIPM | ASPM_STATE_L1;
	if (state & PCIE_LINK_STATE_L1_2_PCIPM)
-		link->aspm_default |= ASPM_STATE_L1_2_PCIPM;
+		link->aspm_default |= ASPM_STATE_L1_2_PCIPM | ASPM_STATE_L1;
	pcie_config_aspm_link(link, policy_to_aspm_state(link));
 
	link->clkpm_default = (state & PCIE_LINK_STATE_CLKPM) ? 1 : 0;
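For reference, the T_PowerOn encoding that calc_l12_pwron() converts, and that aspm_calc_l12_info() then programs identically into both ends of the link, is a 2-bit scale plus a 5-bit value. A small illustrative helper (not part of the patch) that mirrors the conversion:

	/* Illustrative only: L1.2 T_PowerOn, scale 0/1/2 = 2/10/100 us units. */
	static unsigned int demo_t_power_on_us(unsigned int scale, unsigned int value)
	{
		static const unsigned int unit_us[] = { 2, 10, 100 };

		if (scale > 2 || value > 31)
			return 0;	/* reserved encoding */
		return value * unit_us[scale];
	}

	/*
	 * Example: a port advertising scale=2, value=4 (400 us) linked to a
	 * port advertising scale=1, value=10 (100 us) results in both ends
	 * being programmed with the larger 400 us value, as the code above does.
	 */
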
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 0b2826c4a832..8bac3ce02609 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -820,7 +820,6 @@ static void pci_set_bus_speed(struct pci_bus *bus)
 
		pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap);
		bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS];
-		bridge->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC);
 
		pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
		pcie_update_link_speed(bus, linksta);
@@ -997,8 +996,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
	resource_list_for_each_entry_safe(window, n, &resources) {
		offset = window->offset;
		res = window->res;
-		if (!res->flags && !res->start && !res->end)
+		if (!res->flags && !res->start && !res->end) {
+			release_resource(res);
			continue;
+		}
 
		list_move_tail(&window->node, &bridge->windows);
 
@@ -1527,6 +1528,7 @@ void set_pcie_port_type(struct pci_dev *pdev)
 {
	int pos;
	u16 reg16;
+	u32 reg32;
	int type;
	struct pci_dev *parent;
 
@@ -1540,6 +1542,10 @@ void set_pcie_port_type(struct pci_dev *pdev)
	pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
	pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
 
+	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &reg32);
+	if (reg32 & PCI_EXP_LNKCAP_DLLLARC)
+		pdev->link_active_reporting = 1;
+
	parent = pci_upstream_bridge(pdev);
	if (!parent)
		return;
@@ -2546,6 +2552,8 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
	dma_set_max_seg_size(&dev->dev, 65536);
	dma_set_seg_boundary(&dev->dev, 0xffffffff);
 
+	pcie_failed_link_retrain(dev);
+
	/* Fix up broken headers */
	pci_fixup_device(pci_fixup_header, dev);
 
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index c525867760bf..321156ca273d 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -33,6 +33,99 @@
 #include <linux/switchtec.h>
 #include "pci.h"
 
+/*
+ * Retrain the link of a downstream PCIe port by hand if necessary.
+ *
+ * This is needed at least where a downstream port of the ASMedia ASM2824
+ * Gen 3 switch is wired to the upstream port of the Pericom PI7C9X2G304
+ * Gen 2 switch, and observed with the Delock Riser Card PCI Express x1 >
+ * 2 x PCIe x1 device, P/N 41433, plugged into the SiFive HiFive Unmatched
+ * board.
+ *
+ * In such a configuration the switches are supposed to negotiate the link
+ * speed of preferably 5.0GT/s, falling back to 2.5GT/s.  However the link
+ * continues switching between the two speeds indefinitely and the data
+ * link layer never reaches the active state, with link training reported
+ * repeatedly active ~84% of the time.  Forcing the target link speed to
+ * 2.5GT/s with the upstream ASM2824 device makes the two switches talk to
+ * each other correctly however.  And more interestingly retraining with a
+ * higher target link speed afterwards lets the two successfully negotiate
+ * 5.0GT/s.
+ *
+ * With the ASM2824 we can rely on the otherwise optional Data Link Layer
+ * Link Active status bit and in the failed link training scenario it will
+ * be off along with the Link Bandwidth Management Status indicating that
+ * hardware has changed the link speed or width in an attempt to correct
+ * unreliable link operation.  For a port that has been left unconnected
+ * both bits will be clear.  So use this information to detect the problem
+ * rather than polling the Link Training bit and watching out for flips or
+ * at least the active status.
+ *
+ * Since the exact nature of the problem isn't known and in principle this
+ * could trigger where an ASM2824 device is downstream rather than upstream,
+ * apply this erratum workaround to any downstream ports as long as they
+ * support Link Active reporting and have the Link Control 2 register.
+ * Restrict the speed to 2.5GT/s then with the Target Link Speed field,
+ * request a retrain and wait 200ms for the data link to go up.
+ *
+ * If this turns out successful and we know by the Vendor:Device ID it is
+ * safe to do so, then lift the restriction, letting the devices negotiate
+ * a higher speed.  Also check for a similar 2.5GT/s speed restriction the
+ * firmware may have already arranged and lift it with ports that already
+ * report their data link being up.
+ *
+ * Return TRUE if the link has been successfully retrained, otherwise FALSE.
+ */
+bool pcie_failed_link_retrain(struct pci_dev *dev)
+{
+	static const struct pci_device_id ids[] = {
+		{ PCI_VDEVICE(ASMEDIA, 0x2824) }, /* ASMedia ASM2824 */
+		{}
+	};
+	u16 lnksta, lnkctl2;
+
+	if (!pci_is_pcie(dev) || !pcie_downstream_port(dev) ||
+	    !pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
+		return false;
+
+	pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
+	pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+	if ((lnksta & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_DLLLA)) ==
+	    PCI_EXP_LNKSTA_LBMS) {
+		pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
+
+		lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
+		lnkctl2 |= PCI_EXP_LNKCTL2_TLS_2_5GT;
+		pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
+
+		if (pcie_retrain_link(dev, false)) {
+			pci_info(dev, "retraining failed\n");
+			return false;
+		}
+
+		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+	}
+
+	if ((lnksta & PCI_EXP_LNKSTA_DLLLA) &&
+	    (lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT &&
+	    pci_match_id(ids, dev)) {
+		u32 lnkcap;
+
+		pci_info(dev, "removing 2.5GT/s downstream link speed restriction\n");
+		pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
+		lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
+		lnkctl2 |= lnkcap & PCI_EXP_LNKCAP_SLS;
+		pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
+
+		if (pcie_retrain_link(dev, false)) {
+			pci_info(dev, "retraining failed\n");
+			return false;
+		}
+	}
+
+	return true;
+}
+
 static ktime_t fixup_debug_start(struct pci_dev *dev,
				 void (*fn)(struct pci_dev *dev))
 {
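The detection condition in pcie_failed_link_retrain() can be spelled out with the raw Link Status bits as a reading aid (this is not extra code in the patch; the bit values come from the standard PCI Express register layout):

	/*
	 * PCI_EXP_LNKSTA_LBMS  = 0x4000 (Link Bandwidth Management Status)
	 * PCI_EXP_LNKSTA_DLLLA = 0x2000 (Data Link Layer Link Active)
	 *
	 * (lnksta & 0x6000) == 0x4000 therefore means "hardware has
	 * autonomously retrained the link, yet the data link never came up"
	 * -- the ASM2824 failure signature.  An unconnected port reads 0 in
	 * both bits and is left alone.
	 */
	static inline bool demo_training_stuck(u16 lnksta)
	{
		return (lnksta & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_DLLLA)) ==
		       PCI_EXP_LNKSTA_LBMS;
	}
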
@@ -2420,9 +2513,9 @@ static void quirk_enable_clear_retrain_link(struct pci_dev *dev)
	dev->clear_retrain_link = 1;
	pci_info(dev, "Enable PCIe Retrain Link quirk\n");
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe110, quirk_enable_clear_retrain_link);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe111, quirk_enable_clear_retrain_link);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe130, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe110, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe111, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe130, quirk_enable_clear_retrain_link);
 
 static void fixup_rev1_53c810(struct pci_dev *dev)
 {
@@ -3993,10 +4086,11 @@ static int nvme_disable_and_flr(struct pci_dev *dev, bool probe)
 }
 
 /*
- * Intel DC P3700 NVMe controller will timeout waiting for ready status
- * to change after NVMe enable if the driver starts interacting with the
- * device too soon after FLR.  A 250ms delay after FLR has heuristically
- * proven to produce reliably working results for device assignment cases.
+ * Some NVMe controllers such as Intel DC P3700 and Solidigm P44 Pro will
+ * timeout waiting for ready status to change after NVMe enable if the driver
+ * starts interacting with the device too soon after FLR.  A 250ms delay after
+ * FLR has heuristically proven to produce reliably working results for device
+ * assignment cases.
 */
 static int delay_250ms_after_flr(struct pci_dev *dev, bool probe)
 {
@@ -4083,6 +4177,7 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
	{ PCI_VENDOR_ID_INTEL, 0x0a54, delay_250ms_after_flr },
+	{ PCI_VENDOR_ID_SOLIDIGM, 0xf1ac, delay_250ms_after_flr },
	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
		reset_chelsio_generic_dev },
	{ PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HINIC_VF,
@@ -4174,6 +4269,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220,
 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230,
			 quirk_dma_func1_alias);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235,
+			 quirk_dma_func1_alias);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642,
			 quirk_dma_func1_alias);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0645,