author | Linus Torvalds <torvalds@linux-foundation.org> | 2021-11-07 00:36:12 +0300
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2021-11-07 00:36:12 +0300
commit | 0c5c62ddf88c34bc83b66e4ac9beb2bb0e1887d4 (patch) |
tree | f1043e124056c7f2a061f2d2e5240aa687534633 /drivers |
parent | 512b7931ad0561ffe14265f9ff554a3c081b476b (diff) |
parent | dda4b381f05d447a0ae31e2e44aeb35d313a311f (diff) |
download | linux-0c5c62ddf88c34bc83b66e4ac9beb2bb0e1887d4.tar.xz |
Merge tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Conserve IRQs by setting up portdrv IRQs only when there are users
(Jan Kiszka)
- Rework and simplify _OSC negotiation for control of PCIe features
(Joerg Roedel)
- Remove struct pci_dev.driver pointer since it's redundant with the
struct device.driver pointer (Uwe Kleine-König)
Resource management:
- Coalesce contiguous host bridge apertures from _CRS to accommodate
BARs that cover more than one aperture (Kai-Heng Feng)
Sysfs:
- Check CAP_SYS_ADMIN before parsing user input (Krzysztof
Wilczyński)
- Return -EINVAL consistently from "store" functions (Krzysztof
Wilczyński)
- Use sysfs_emit() in endpoint "show" functions to avoid buffer
overruns (Kunihiko Hayashi)
PCIe native device hotplug:
- Ignore Link Down/Up caused by resets during error recovery so
endpoint drivers can remain bound to the device (Lukas Wunner)
Virtualization:
- Avoid bus resets on Atheros QCA6174, where they hang the device
(Ingmar Klein)
- Work around Pericom PI7C9X2G switch packet drop erratum by using
store and forward mode instead of cut-through (Nathan Rossi)
- Avoid trying to enable AtomicOps on VFs; the PF setting applies to
all VFs (Selvin Xavier)
MSI:
- Document that /sys/bus/pci/devices/.../irq contains the legacy INTx
interrupt or the IRQ of the first MSI (not MSI-X) vector (Barry
Song)
VPD:
- Add pci_read_vpd_any() and pci_write_vpd_any() to access anywhere
in the possible VPD space; use these to simplify the cxgb3 driver
(Heiner Kallweit)
Peer-to-peer DMA:
- Add (not subtract) the bus offset when calculating DMA address
(Wang Lu)
ASPM:
- Re-enable LTR at Downstream Ports so they don't report Unsupported
Requests when reset or hot-added devices send LTR messages
(Mingchuang Qiao)
Apple PCIe controller driver:
- Add driver for Apple M1 PCIe controller (Alyssa Rosenzweig, Marc
Zyngier)
Cadence PCIe controller driver:
- Return success when probe succeeds instead of falling into error
path (Li Chen)
HiSilicon Kirin PCIe controller driver:
- Reorganize PHY logic and add support for external PHY drivers
(Mauro Carvalho Chehab)
- Support PERST# GPIOs for HiKey970 external PEX 8606 bridge (Mauro
Carvalho Chehab)
- Add Kirin 970 support (Mauro Carvalho Chehab)
- Make driver removable (Mauro Carvalho Chehab)
Intel VMD host bridge driver:
- If IOMMU supports interrupt remapping, leave VMD MSI-X remapping
enabled (Adrian Huang)
- Number each controller so we can tell them apart in
/proc/interrupts (Chunguang Xu)
- Avoid building on UML because VMD depends on x86 bare metal APIs
(Johannes Berg)
Marvell Aardvark PCIe controller driver:
- Define macros for PCI_EXP_DEVCTL_PAYLOAD_* (Pali Rohár)
- Set Max Payload Size to 512 bytes per Marvell spec (Pali Rohár)
- Downgrade PIO Response Status messages to debug level (Marek Behún)
- Preserve CRS SV (Config Request Retry Software Visibility) bit in
emulated Root Control register (Pali Rohár)
- Fix issue in configuring reference clock (Pali Rohár)
- Don't clear status bits for masked interrupts (Pali Rohár)
- Don't mask unused interrupts (Pali Rohár)
- Avoid code repetition in advk_pcie_rd_conf() (Marek Behún)
- Retry config accesses on CRS response (Pali Rohár)
- Simplify emulated Root Capabilities initialization (Pali Rohár)
- Fix several link training issues (Pali Rohár)
- Fix link-up checking via LTSSM (Pali Rohár)
- Fix reporting of Data Link Layer Link Active (Pali Rohár)
- Fix emulation of W1C bits (Marek Behún)
- Fix MSI domain .alloc() method to return zero on success (Marek
Behún)
- Read entire 16-bit MSI vector in MSI handler, not just low 8 bits
(Marek Behún)
- Clear Root Port I/O Space, Memory Space, and Bus Master Enable bits
at startup; PCI core will set those as necessary (Pali Rohár)
- When operating as a Root Port, set class code to "PCI Bridge"
instead of the default "Mass Storage Controller" (Pali Rohár)
- Add emulation for PCI_BRIDGE_CTL_BUS_RESET since aardvark doesn't
implement this per spec (Pali Rohár)
- Add emulation of option ROM BAR since aardvark doesn't implement
this per spec (Pali Rohár)
MediaTek MT7621 PCIe controller driver:
- Add MediaTek MT7621 PCIe host controller driver and DT binding
(Sergio Paracuellos)
Qualcomm PCIe controller driver:
- Add SC8180x compatible string (Bjorn Andersson)
- Add endpoint controller driver and DT binding (Manivannan
Sadhasivam)
- Restructure to use of_device_get_match_data() (Prasad Malisetty)
- Add SC7280-specific pcie_1_pipe_clk_src handling (Prasad Malisetty)
Renesas R-Car PCIe controller driver:
- Remove unnecessary includes (Geert Uytterhoeven)
Rockchip DesignWare PCIe controller driver:
- Add DT binding (Simon Xue)
Socionext UniPhier Pro5 controller driver:
- Serialize INTx masking/unmasking (Kunihiko Hayashi)
Synopsys DesignWare PCIe controller driver:
- Run dwc .host_init() method before registering MSI interrupt
handler so we can deal with pending interrupts left by bootloader
(Bjorn Andersson)
- Clean up Kconfig dependencies (Andy Shevchenko)
- Export symbols to allow more modular drivers (Luca Ceresoli)
TI DRA7xx PCIe controller driver:
- Allow host and endpoint drivers to be modules (Luca Ceresoli)
- Enable external clock if present (Luca Ceresoli)
TI J721E PCIe driver:
- Disable PHY when probe fails after initializing it (Christophe
JAILLET)
MicroSemi Switchtec management driver:
- Return error to application when command execution fails because an
out-of-band reset has cleared the device BARs, Memory Space Enable,
etc. (Kelvin Cao)
- Fix MRPC error status handling issue (Kelvin Cao)
- Mask out other bits when reading the management VEP instance ID
(Kelvin Cao)
- Return EOPNOTSUPP instead of ENOTSUPP from sysfs show functions
(Kelvin Cao)
- Add check of event support (Logan Gunthorpe)
Miscellaneous:
- Remove unused pci_pool wrappers, which have been replaced by
dma_pool (Cai Huoqing)
- Use 'unsigned int' instead of bare 'unsigned' (Krzysztof
Wilczyński)
- Use kstrtobool() directly, sans strtobool() wrapper (Krzysztof
Wilczyński)
- Fix some sscanf(), sprintf() format mismatches (Krzysztof
Wilczyński)
- Update PCI subsystem information in MAINTAINERS (Krzysztof
Wilczyński)
- Correct some misspellings (Krzysztof Wilczyński)"
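The "Enumeration" item above removes the struct pci_dev.driver pointer, and most of the driver hunks in the diff further down (bcma, hisilicon/qm, cxl, fusion, hns3, prestera, mlxsw, nfp) convert to one of two replacement patterns. Below is a minimal, hedged sketch of those patterns; report_bound_driver() is a hypothetical example function, while dev_driver_string() and to_pci_driver() are the helpers the diff itself uses:

```c
#include <linux/device.h>
#include <linux/pci.h>

/* Hypothetical example; only the two helpers come from the series itself. */
static void report_bound_driver(struct pci_dev *pdev)
{
	/* Driver name: replaces pdev->driver->name */
	const char *name = dev_driver_string(&pdev->dev);

	/* Driver callbacks: replaces pdev->driver->err_handler et al. */
	struct pci_driver *pdrv = to_pci_driver(pdev->dev.driver);

	if (pdrv && pdrv->err_handler && pdrv->err_handler->resume)
		pdrv->err_handler->resume(pdev);

	dev_info(&pdev->dev, "bound to %s\n", name);
}
```

As in the cxl hunks, dev.driver may be NULL when no driver is bound, which is why the result of to_pci_driver() is checked before use.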
* tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (137 commits)
PCI: Add ACS quirk for Pericom PI7C9X2G switches
PCI: apple: Configure RID to SID mapper on device addition
iommu/dart: Exclude MSI doorbell from PCIe device IOVA range
PCI: apple: Implement MSI support
PCI: apple: Add INTx and per-port interrupt support
PCI: kirin: Allow removing the driver
PCI: kirin: De-init the dwc driver
PCI: kirin: Disable clkreq during poweroff sequence
PCI: kirin: Move the power-off code to a common routine
PCI: kirin: Add power_off support for Kirin 960 PHY
PCI: kirin: Allow building it as a module
PCI: kirin: Add MODULE_* macros
PCI: kirin: Add Kirin 970 compatible
PCI: kirin: Support PERST# GPIOs for HiKey970 external PEX 8606 bridge
PCI: apple: Set up reference clocks when probing
PCI: apple: Add initial hardware bring-up
PCI: of: Allow matching of an interrupt-map local to a PCI device
of/irq: Allow matching of an interrupt-map local to an interrupt controller
irqdomain: Make of_phandle_args_to_fwspec() generally available
PCI: Do not enable AtomicOps on VFs
...
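The "VPD" item in the changelog adds pci_read_vpd_any()/pci_write_vpd_any(), and the cxgb3 hunks below replace t3_seeprom_read()/t3_seeprom_write() with the VPD API. A small sketch of the calling convention, assuming only what those hunks show (byte offset, length, buffer; the return value is the number of bytes transferred or a negative errno); read_vpd_u32() is a hypothetical wrapper:

```c
#include <linux/pci.h>

/* Hypothetical wrapper: read one 32-bit word from a device's VPD. */
static int read_vpd_u32(struct pci_dev *pdev, loff_t off, u32 *val)
{
	/*
	 * pci_read_vpd() stays inside the advertised VPD size;
	 * pci_read_vpd_any() also accepts addresses outside it, which is
	 * what lets cxgb3 reach EEPROM_STAT_ADDR in t3_seeprom_wp().
	 */
	ssize_t ret = pci_read_vpd(pdev, off, sizeof(*val), val);

	return ret < 0 ? ret : 0;	/* negative return is an errno */
}
```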
Diffstat (limited to 'drivers')
97 files changed, 3410 insertions, 1146 deletions
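The first hunk below is the _OSC rework in drivers/acpi/pci_root.c ("Rework and simplify _OSC negotiation" in the changelog). Since the flattened diff is hard to follow, here is a condensed sketch of the resulting negotiate_os_control() flow as it would sit inside that file; it reuses the helper names introduced by the hunk and omits the x86 Apple quirk, logging, and most error reporting:

```c
/* Condensed sketch of the reworked flow; not the verbatim kernel function. */
static void negotiate_os_control_sketch(struct acpi_pci_root *root, int *no_aspm)
{
	u32 support, control = 0;
	acpi_status status;

	support = calculate_support();		/* OSC_PCI_*_SUPPORT bits */

	/*
	 * Request control only if PCIe port services are enabled and the
	 * mandatory support bits are all present.
	 */
	if (os_control_query_checks(root, support))
		control = calculate_control();	/* OSC_PCI_*_CONTROL bits */

	status = acpi_pci_osc_control_set(root->device->handle, &control, support);
	if (ACPI_FAILURE(status) ||
	    (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM))
		*no_aspm = 1;	/* platform or FADT retains ASPM control */
}
```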
diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c index d7deedf3548e..ab2f7dfb0c44 100644 --- a/drivers/acpi/pci_root.c +++ b/drivers/acpi/pci_root.c @@ -199,33 +199,20 @@ static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root, acpi_status status; u32 result, capbuf[3]; - support &= OSC_PCI_SUPPORT_MASKS; support |= root->osc_support_set; capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE; capbuf[OSC_SUPPORT_DWORD] = support; - if (control) { - *control &= OSC_PCI_CONTROL_MASKS; - capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set; - } else { - /* Run _OSC query only with existing controls. */ - capbuf[OSC_CONTROL_DWORD] = root->osc_control_set; - } + capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set; status = acpi_pci_run_osc(root->device->handle, capbuf, &result); if (ACPI_SUCCESS(status)) { root->osc_support_set = support; - if (control) - *control = result; + *control = result; } return status; } -static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags) -{ - return acpi_pci_query_osc(root, flags, NULL); -} - struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle) { struct acpi_pci_root *root; @@ -348,8 +335,9 @@ EXPORT_SYMBOL_GPL(acpi_get_pci_dev); * _OSC bits the BIOS has granted control of, but its contents are meaningless * on failure. **/ -static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req) +static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 support) { + u32 req = OSC_PCI_EXPRESS_CAPABILITY_CONTROL; struct acpi_pci_root *root; acpi_status status; u32 ctrl, capbuf[3]; @@ -357,22 +345,16 @@ static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 r if (!mask) return AE_BAD_PARAMETER; - ctrl = *mask & OSC_PCI_CONTROL_MASKS; - if ((ctrl & req) != req) - return AE_TYPE; - root = acpi_pci_find_root(handle); if (!root) return AE_NOT_EXIST; - *mask = ctrl | root->osc_control_set; - /* No need to evaluate _OSC if the control was already granted. */ - if ((root->osc_control_set & ctrl) == ctrl) - return AE_OK; + ctrl = *mask; + *mask |= root->osc_control_set; /* Need to check the available controls bits before requesting them. */ - while (*mask) { - status = acpi_pci_query_osc(root, root->osc_support_set, mask); + do { + status = acpi_pci_query_osc(root, support, mask); if (ACPI_FAILURE(status)) return status; if (ctrl == *mask) @@ -380,7 +362,11 @@ static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 r decode_osc_control(root, "platform does not support", ctrl & ~(*mask)); ctrl = *mask; - } + } while (*mask); + + /* No need to request _OSC if the control was already granted. */ + if ((root->osc_control_set & ctrl) == ctrl) + return AE_OK; if ((ctrl & req) != req) { decode_osc_control(root, "not requesting control; platform does not support", @@ -399,25 +385,9 @@ static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 r return AE_OK; } -static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, - bool is_pcie) +static u32 calculate_support(void) { - u32 support, control, requested; - acpi_status status; - struct acpi_device *device = root->device; - acpi_handle handle = device->handle; - - /* - * Apple always return failure on _OSC calls when _OSI("Darwin") has - * been called successfully. 
We know the feature set supported by the - * platform, so avoid calling _OSC at all - */ - if (x86_apple_machine) { - root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL; - decode_osc_control(root, "OS assumes control of", - root->osc_control_set); - return; - } + u32 support; /* * All supported architectures that use ACPI have support for @@ -434,30 +404,12 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, if (IS_ENABLED(CONFIG_PCIE_EDR)) support |= OSC_PCI_EDR_SUPPORT; - decode_osc_support(root, "OS supports", support); - status = acpi_pci_osc_support(root, support); - if (ACPI_FAILURE(status)) { - *no_aspm = 1; - - /* _OSC is optional for PCI host bridges */ - if ((status == AE_NOT_FOUND) && !is_pcie) - return; - - dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n", - acpi_format_exception(status)); - return; - } - - if (pcie_ports_disabled) { - dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n"); - return; - } + return support; +} - if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) { - decode_osc_support(root, "not requesting OS control; OS requires", - ACPI_PCIE_REQ_SUPPORT); - return; - } +static u32 calculate_control(void) +{ + u32 control; control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL | OSC_PCI_EXPRESS_PME_CONTROL; @@ -483,11 +435,59 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR)) control |= OSC_PCI_EXPRESS_DPC_CONTROL; - requested = control; - status = acpi_pci_osc_control_set(handle, &control, - OSC_PCI_EXPRESS_CAPABILITY_CONTROL); + return control; +} + +static bool os_control_query_checks(struct acpi_pci_root *root, u32 support) +{ + struct acpi_device *device = root->device; + + if (pcie_ports_disabled) { + dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n"); + return false; + } + + if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) { + decode_osc_support(root, "not requesting OS control; OS requires", + ACPI_PCIE_REQ_SUPPORT); + return false; + } + + return true; +} + +static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, + bool is_pcie) +{ + u32 support, control = 0, requested = 0; + acpi_status status; + struct acpi_device *device = root->device; + acpi_handle handle = device->handle; + + /* + * Apple always return failure on _OSC calls when _OSI("Darwin") has + * been called successfully. 
We know the feature set supported by the + * platform, so avoid calling _OSC at all + */ + if (x86_apple_machine) { + root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL; + decode_osc_control(root, "OS assumes control of", + root->osc_control_set); + return; + } + + support = calculate_support(); + + decode_osc_support(root, "OS supports", support); + + if (os_control_query_checks(root, support)) + requested = control = calculate_control(); + + status = acpi_pci_osc_control_set(handle, &control, support); if (ACPI_SUCCESS(status)) { - decode_osc_control(root, "OS now controls", control); + if (control) + decode_osc_control(root, "OS now controls", control); + if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) { /* * We have ASPM control, but the FADT indicates that @@ -498,11 +498,6 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, *no_aspm = 1; } } else { - decode_osc_control(root, "OS requested", requested); - decode_osc_control(root, "platform willing to grant", control); - dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n", - acpi_format_exception(status)); - /* * We want to disable ASPM here, but aspm_disabled * needs to remain in its state from boot so that we @@ -511,6 +506,18 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, * root scan. */ *no_aspm = 1; + + /* _OSC is optional for PCI host bridges */ + if ((status == AE_NOT_FOUND) && !is_pcie) + return; + + if (control) { + decode_osc_control(root, "OS requested", requested); + decode_osc_control(root, "platform willing to grant", control); + } + + dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n", + acpi_format_exception(status)); } } diff --git a/drivers/bcma/host_pci.c b/drivers/bcma/host_pci.c index 69c10a7b7c61..960632197b05 100644 --- a/drivers/bcma/host_pci.c +++ b/drivers/bcma/host_pci.c @@ -162,7 +162,6 @@ static int bcma_host_pci_probe(struct pci_dev *dev, { struct bcma_bus *bus; int err = -ENOMEM; - const char *name; u32 val; /* Alloc */ @@ -175,10 +174,7 @@ static int bcma_host_pci_probe(struct pci_dev *dev, if (err) goto err_kfree_bus; - name = dev_name(&dev->dev); - if (dev->driver && dev->driver->name) - name = dev->driver->name; - err = pci_request_regions(dev, name); + err = pci_request_regions(dev, "bcma-pci-bridge"); if (err) goto err_pci_disable; pci_set_master(dev); diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index fed52ae516ba..52d6cca6262e 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -3118,7 +3118,7 @@ static int qm_alloc_uacce(struct hisi_qm *qm) }; int ret; - ret = strscpy(interface.name, pdev->driver->name, + ret = strscpy(interface.name, dev_driver_string(&pdev->dev), sizeof(interface.name)); if (ret < 0) return -ENAMETOOLONG; diff --git a/drivers/crypto/qat/qat_4xxx/adf_drv.c b/drivers/crypto/qat/qat_4xxx/adf_drv.c index 359fb7989dfb..71ef065914b2 100644 --- a/drivers/crypto/qat/qat_4xxx/adf_drv.c +++ b/drivers/crypto/qat/qat_4xxx/adf_drv.c @@ -247,11 +247,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) pci_set_master(pdev); - if (adf_enable_aer(accel_dev)) { - dev_err(&pdev->dev, "Failed to enable aer.\n"); - ret = -EFAULT; - goto out_err; - } + adf_enable_aer(accel_dev); if (pci_save_state(pdev)) { dev_err(&pdev->dev, "Failed to save pci state.\n"); @@ -304,6 +300,7 @@ static struct pci_driver adf_driver = { .probe = adf_probe, .remove = adf_remove, .sriov_configure = adf_sriov_configure, 
+ .err_handler = &adf_err_handler, }; module_pci_driver(adf_driver); diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c index cc6e75dc60de..2aef0bb791df 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c @@ -33,6 +33,7 @@ static struct pci_driver adf_driver = { .probe = adf_probe, .remove = adf_remove, .sriov_configure = adf_sriov_configure, + .err_handler = &adf_err_handler, }; static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev) @@ -192,11 +193,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) } pci_set_master(pdev); - if (adf_enable_aer(accel_dev)) { - dev_err(&pdev->dev, "Failed to enable aer\n"); - ret = -EFAULT; - goto out_err_free_reg; - } + adf_enable_aer(accel_dev); if (pci_save_state(pdev)) { dev_err(&pdev->dev, "Failed to save pci state\n"); diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c index bf251dfe74b3..56163083f161 100644 --- a/drivers/crypto/qat/qat_c62x/adf_drv.c +++ b/drivers/crypto/qat/qat_c62x/adf_drv.c @@ -33,6 +33,7 @@ static struct pci_driver adf_driver = { .probe = adf_probe, .remove = adf_remove, .sriov_configure = adf_sriov_configure, + .err_handler = &adf_err_handler, }; static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev) @@ -192,11 +193,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) } pci_set_master(pdev); - if (adf_enable_aer(accel_dev)) { - dev_err(&pdev->dev, "Failed to enable aer\n"); - ret = -EFAULT; - goto out_err_free_reg; - } + adf_enable_aer(accel_dev); if (pci_save_state(pdev)) { dev_err(&pdev->dev, "Failed to save pci state\n"); diff --git a/drivers/crypto/qat/qat_common/adf_aer.c b/drivers/crypto/qat/qat_common/adf_aer.c index ed3e40bc56eb..fe9bb2f3536a 100644 --- a/drivers/crypto/qat/qat_common/adf_aer.c +++ b/drivers/crypto/qat/qat_common/adf_aer.c @@ -166,11 +166,12 @@ static void adf_resume(struct pci_dev *pdev) dev_info(&pdev->dev, "Device is up and running\n"); } -static const struct pci_error_handlers adf_err_handler = { +const struct pci_error_handlers adf_err_handler = { .error_detected = adf_error_detected, .slot_reset = adf_slot_reset, .resume = adf_resume, }; +EXPORT_SYMBOL_GPL(adf_err_handler); /** * adf_enable_aer() - Enable Advance Error Reporting for acceleration device @@ -179,17 +180,12 @@ static const struct pci_error_handlers adf_err_handler = { * Function enables PCI Advance Error Reporting for the * QAT acceleration device accel_dev. * To be used by QAT device specific drivers. - * - * Return: 0 on success, error code otherwise. 
*/ -int adf_enable_aer(struct adf_accel_dev *accel_dev) +void adf_enable_aer(struct adf_accel_dev *accel_dev) { struct pci_dev *pdev = accel_to_pci_dev(accel_dev); - struct pci_driver *pdrv = pdev->driver; - pdrv->err_handler = &adf_err_handler; pci_enable_pcie_error_reporting(pdev); - return 0; } EXPORT_SYMBOL_GPL(adf_enable_aer); diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h index 2cc6622833c4..de94b76a6d2c 100644 --- a/drivers/crypto/qat/qat_common/adf_common_drv.h +++ b/drivers/crypto/qat/qat_common/adf_common_drv.h @@ -94,7 +94,8 @@ void adf_ae_fw_release(struct adf_accel_dev *accel_dev); int adf_ae_start(struct adf_accel_dev *accel_dev); int adf_ae_stop(struct adf_accel_dev *accel_dev); -int adf_enable_aer(struct adf_accel_dev *accel_dev); +extern const struct pci_error_handlers adf_err_handler; +void adf_enable_aer(struct adf_accel_dev *accel_dev); void adf_disable_aer(struct adf_accel_dev *accel_dev); void adf_reset_sbr(struct adf_accel_dev *accel_dev); void adf_reset_flr(struct adf_accel_dev *accel_dev); diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c index 3976a81bd99b..acca56752aa0 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c @@ -33,6 +33,7 @@ static struct pci_driver adf_driver = { .probe = adf_probe, .remove = adf_remove, .sriov_configure = adf_sriov_configure, + .err_handler = &adf_err_handler, }; static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev) @@ -192,11 +193,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) } pci_set_master(pdev); - if (adf_enable_aer(accel_dev)) { - dev_err(&pdev->dev, "Failed to enable aer\n"); - ret = -EFAULT; - goto out_err_free_reg; - } + adf_enable_aer(accel_dev); if (pci_save_state(pdev)) { dev_err(&pdev->dev, "Failed to save pci state\n"); diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c index 96d4a1f8de79..565ef5598811 100644 --- a/drivers/iommu/apple-dart.c +++ b/drivers/iommu/apple-dart.c @@ -15,6 +15,7 @@ #include <linux/bitfield.h> #include <linux/clk.h> #include <linux/dev_printk.h> +#include <linux/dma-iommu.h> #include <linux/dma-mapping.h> #include <linux/err.h> #include <linux/interrupt.h> @@ -737,6 +738,31 @@ static int apple_dart_def_domain_type(struct device *dev) return 0; } +#ifndef CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR +/* Keep things compiling when CONFIG_PCI_APPLE isn't selected */ +#define CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR 0 +#endif +#define DOORBELL_ADDR (CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR & PAGE_MASK) + +static void apple_dart_get_resv_regions(struct device *dev, + struct list_head *head) +{ + if (IS_ENABLED(CONFIG_PCIE_APPLE) && dev_is_pci(dev)) { + struct iommu_resv_region *region; + int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO; + + region = iommu_alloc_resv_region(DOORBELL_ADDR, + PAGE_SIZE, prot, + IOMMU_RESV_MSI); + if (!region) + return; + + list_add_tail(®ion->list, head); + } + + iommu_dma_get_resv_regions(dev, head); +} + static const struct iommu_ops apple_dart_iommu_ops = { .domain_alloc = apple_dart_domain_alloc, .domain_free = apple_dart_domain_free, @@ -753,6 +779,8 @@ static const struct iommu_ops apple_dart_iommu_ops = { .device_group = apple_dart_device_group, .of_xlate = apple_dart_of_xlate, .def_domain_type = apple_dart_def_domain_type, + .get_resv_regions = apple_dart_get_resv_regions, + .put_resv_regions = generic_iommu_put_resv_regions, .pgsize_bitmap = -1UL, /* 
Restricted during dart probe */ }; diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c index 7f7abc9069f7..b94d5e4fdc23 100644 --- a/drivers/message/fusion/mptbase.c +++ b/drivers/message/fusion/mptbase.c @@ -829,7 +829,6 @@ int mpt_device_driver_register(struct mpt_pci_driver * dd_cbfunc, u8 cb_idx) { MPT_ADAPTER *ioc; - const struct pci_device_id *id; if (!cb_idx || cb_idx >= MPT_MAX_PROTOCOL_DRIVERS) return -EINVAL; @@ -838,10 +837,8 @@ mpt_device_driver_register(struct mpt_pci_driver * dd_cbfunc, u8 cb_idx) /* call per pci device probe entry point */ list_for_each_entry(ioc, &ioc_list, list) { - id = ioc->pcidev->driver ? - ioc->pcidev->driver->id_table : NULL; if (dd_cbfunc->probe) - dd_cbfunc->probe(ioc->pcidev, id); + dd_cbfunc->probe(ioc->pcidev); } return 0; @@ -2032,7 +2029,7 @@ mpt_attach(struct pci_dev *pdev, const struct pci_device_id *id) for(cb_idx = 0; cb_idx < MPT_MAX_PROTOCOL_DRIVERS; cb_idx++) { if(MptDeviceDriverHandlers[cb_idx] && MptDeviceDriverHandlers[cb_idx]->probe) { - MptDeviceDriverHandlers[cb_idx]->probe(pdev,id); + MptDeviceDriverHandlers[cb_idx]->probe(pdev); } } diff --git a/drivers/message/fusion/mptbase.h b/drivers/message/fusion/mptbase.h index b9e0376be723..4bd0682c65d3 100644 --- a/drivers/message/fusion/mptbase.h +++ b/drivers/message/fusion/mptbase.h @@ -257,7 +257,7 @@ typedef enum { } MPT_DRIVER_CLASS; struct mpt_pci_driver{ - int (*probe) (struct pci_dev *dev, const struct pci_device_id *id); + int (*probe) (struct pci_dev *dev); void (*remove) (struct pci_dev *dev); }; diff --git a/drivers/message/fusion/mptctl.c b/drivers/message/fusion/mptctl.c index 72025996cd70..ae433c150b37 100644 --- a/drivers/message/fusion/mptctl.c +++ b/drivers/message/fusion/mptctl.c @@ -114,7 +114,7 @@ static int mptctl_do_reset(MPT_ADAPTER *iocp, unsigned long arg); static int mptctl_hp_hostinfo(MPT_ADAPTER *iocp, unsigned long arg, unsigned int cmd); static int mptctl_hp_targetinfo(MPT_ADAPTER *iocp, unsigned long arg); -static int mptctl_probe(struct pci_dev *, const struct pci_device_id *); +static int mptctl_probe(struct pci_dev *); static void mptctl_remove(struct pci_dev *); #ifdef CONFIG_COMPAT @@ -2838,7 +2838,7 @@ static long compat_mpctl_ioctl(struct file *f, unsigned int cmd, unsigned long a */ static int -mptctl_probe(struct pci_dev *pdev, const struct pci_device_id *id) +mptctl_probe(struct pci_dev *pdev) { MPT_ADAPTER *ioc = pci_get_drvdata(pdev); diff --git a/drivers/message/fusion/mptlan.c b/drivers/message/fusion/mptlan.c index acdc257a900e..117fa4ebf6d7 100644 --- a/drivers/message/fusion/mptlan.c +++ b/drivers/message/fusion/mptlan.c @@ -1377,7 +1377,7 @@ mpt_register_lan_device (MPT_ADAPTER *mpt_dev, int pnum) } static int -mptlan_probe(struct pci_dev *pdev, const struct pci_device_id *id) +mptlan_probe(struct pci_dev *pdev) { MPT_ADAPTER *ioc = pci_get_drvdata(pdev); struct net_device *dev; diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c index 186308f1f8eb..9d485c9e3fff 100644 --- a/drivers/misc/cxl/guest.c +++ b/drivers/misc/cxl/guest.c @@ -20,34 +20,38 @@ static void pci_error_handlers(struct cxl_afu *afu, pci_channel_state_t state) { struct pci_dev *afu_dev; + struct pci_driver *afu_drv; + const struct pci_error_handlers *err_handler; if (afu->phb == NULL) return; list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { - if (!afu_dev->driver) + afu_drv = to_pci_driver(afu_dev->dev.driver); + if (!afu_drv) continue; + err_handler = afu_drv->err_handler; switch (bus_error_event) { case 
CXL_ERROR_DETECTED_EVENT: afu_dev->error_state = state; - if (afu_dev->driver->err_handler && - afu_dev->driver->err_handler->error_detected) - afu_dev->driver->err_handler->error_detected(afu_dev, state); - break; + if (err_handler && + err_handler->error_detected) + err_handler->error_detected(afu_dev, state); + break; case CXL_SLOT_RESET_EVENT: afu_dev->error_state = state; - if (afu_dev->driver->err_handler && - afu_dev->driver->err_handler->slot_reset) - afu_dev->driver->err_handler->slot_reset(afu_dev); - break; + if (err_handler && + err_handler->slot_reset) + err_handler->slot_reset(afu_dev); + break; case CXL_RESUME_EVENT: - if (afu_dev->driver->err_handler && - afu_dev->driver->err_handler->resume) - afu_dev->driver->err_handler->resume(afu_dev); - break; + if (err_handler && + err_handler->resume) + err_handler->resume(afu_dev); + break; } } } diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c index 2ba899f5659f..3de0aea62ade 100644 --- a/drivers/misc/cxl/pci.c +++ b/drivers/misc/cxl/pci.c @@ -1795,6 +1795,8 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu, pci_channel_state_t state) { struct pci_dev *afu_dev; + struct pci_driver *afu_drv; + const struct pci_error_handlers *err_handler; pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET; pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET; @@ -1805,14 +1807,16 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu, return result; list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { - if (!afu_dev->driver) + afu_drv = to_pci_driver(afu_dev->dev.driver); + if (!afu_drv) continue; afu_dev->error_state = state; - if (afu_dev->driver->err_handler) - afu_result = afu_dev->driver->err_handler->error_detected(afu_dev, - state); + err_handler = afu_drv->err_handler; + if (err_handler) + afu_result = err_handler->error_detected(afu_dev, + state); /* Disconnect trumps all, NONE trumps NEED_RESET */ if (afu_result == PCI_ERS_RESULT_DISCONNECT) result = PCI_ERS_RESULT_DISCONNECT; @@ -1972,6 +1976,8 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev) struct cxl_afu *afu; struct cxl_context *ctx; struct pci_dev *afu_dev; + struct pci_driver *afu_drv; + const struct pci_error_handlers *err_handler; pci_ers_result_t afu_result = PCI_ERS_RESULT_RECOVERED; pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED; int i; @@ -2028,12 +2034,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev) * shouldn't start new work until we call * their resume function. */ - if (!afu_dev->driver) + afu_drv = to_pci_driver(afu_dev->dev.driver); + if (!afu_drv) continue; - if (afu_dev->driver->err_handler && - afu_dev->driver->err_handler->slot_reset) - afu_result = afu_dev->driver->err_handler->slot_reset(afu_dev); + err_handler = afu_drv->err_handler; + if (err_handler && err_handler->slot_reset) + afu_result = err_handler->slot_reset(afu_dev); if (afu_result == PCI_ERS_RESULT_DISCONNECT) result = PCI_ERS_RESULT_DISCONNECT; @@ -2060,6 +2067,8 @@ static void cxl_pci_resume(struct pci_dev *pdev) struct cxl *adapter = pci_get_drvdata(pdev); struct cxl_afu *afu; struct pci_dev *afu_dev; + struct pci_driver *afu_drv; + const struct pci_error_handlers *err_handler; int i; /* Everything is back now. Drivers should restart work now. 
@@ -2074,9 +2083,13 @@ static void cxl_pci_resume(struct pci_dev *pdev) continue; list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { - if (afu_dev->driver && afu_dev->driver->err_handler && - afu_dev->driver->err_handler->resume) - afu_dev->driver->err_handler->resume(afu_dev); + afu_drv = to_pci_driver(afu_dev->dev.driver); + if (!afu_drv) + continue; + + err_handler = afu_drv->err_handler; + if (err_handler && err_handler->resume) + err_handler->resume(afu_dev); } } spin_unlock(&adapter->afu_list_lock); diff --git a/drivers/net/ethernet/chelsio/cxgb3/common.h b/drivers/net/ethernet/chelsio/cxgb3/common.h index a309016f7f8c..ecd025dda8d6 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/common.h +++ b/drivers/net/ethernet/chelsio/cxgb3/common.h @@ -676,8 +676,6 @@ void t3_link_changed(struct adapter *adapter, int port_id); void t3_link_fault(struct adapter *adapter, int port_id); int t3_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc); const struct adapter_info *t3_get_adapter_info(unsigned int board_id); -int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data); -int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data); int t3_seeprom_wp(struct adapter *adapter, int enable); int t3_get_tp_version(struct adapter *adapter, u32 *vers); int t3_check_tpsram_version(struct adapter *adapter); diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c index 9cf9e33664e4..bfffcaeee624 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c +++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c @@ -2036,20 +2036,16 @@ static int get_eeprom(struct net_device *dev, struct ethtool_eeprom *e, { struct port_info *pi = netdev_priv(dev); struct adapter *adapter = pi->adapter; - int i, err = 0; - - u8 *buf = kmalloc(EEPROMSIZE, GFP_KERNEL); - if (!buf) - return -ENOMEM; + int cnt; e->magic = EEPROM_MAGIC; - for (i = e->offset & ~3; !err && i < e->offset + e->len; i += 4) - err = t3_seeprom_read(adapter, i, (__le32 *) & buf[i]); + cnt = pci_read_vpd(adapter->pdev, e->offset, e->len, data); + if (cnt < 0) + return cnt; - if (!err) - memcpy(data, buf + e->offset, e->len); - kfree(buf); - return err; + e->len = cnt; + + return 0; } static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, @@ -2058,7 +2054,6 @@ static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, struct port_info *pi = netdev_priv(dev); struct adapter *adapter = pi->adapter; u32 aligned_offset, aligned_len; - __le32 *p; u8 *buf; int err; @@ -2072,12 +2067,9 @@ static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, buf = kmalloc(aligned_len, GFP_KERNEL); if (!buf) return -ENOMEM; - err = t3_seeprom_read(adapter, aligned_offset, (__le32 *) buf); - if (!err && aligned_len > 4) - err = t3_seeprom_read(adapter, - aligned_offset + aligned_len - 4, - (__le32 *) & buf[aligned_len - 4]); - if (err) + err = pci_read_vpd(adapter->pdev, aligned_offset, aligned_len, + buf); + if (err < 0) goto out; memcpy(buf + (eeprom->offset & 3), data, eeprom->len); } else @@ -2087,17 +2079,13 @@ static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, if (err) goto out; - for (p = (__le32 *) buf; !err && aligned_len; aligned_len -= 4, p++) { - err = t3_seeprom_write(adapter, aligned_offset, *p); - aligned_offset += 4; - } - - if (!err) + err = pci_write_vpd(adapter->pdev, aligned_offset, aligned_len, buf); + if (err >= 0) err = t3_seeprom_wp(adapter, 1); out: if (buf != 
data) kfree(buf); - return err; + return err < 0 ? err : 0; } static void get_wol(struct net_device *dev, struct ethtool_wolinfo *wol) diff --git a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c index 53feac8da503..da41eee2f25c 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c +++ b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c @@ -596,81 +596,10 @@ struct t3_vpd { u32 pad; /* for multiple-of-4 sizing and alignment */ }; -#define EEPROM_MAX_POLL 40 #define EEPROM_STAT_ADDR 0x4000 #define VPD_BASE 0xc00 /** - * t3_seeprom_read - read a VPD EEPROM location - * @adapter: adapter to read - * @addr: EEPROM address - * @data: where to store the read data - * - * Read a 32-bit word from a location in VPD EEPROM using the card's PCI - * VPD ROM capability. A zero is written to the flag bit when the - * address is written to the control register. The hardware device will - * set the flag to 1 when 4 bytes have been read into the data register. - */ -int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data) -{ - u16 val; - int attempts = EEPROM_MAX_POLL; - u32 v; - unsigned int base = adapter->params.pci.vpd_cap_addr; - - if ((addr >= EEPROMSIZE && addr != EEPROM_STAT_ADDR) || (addr & 3)) - return -EINVAL; - - pci_write_config_word(adapter->pdev, base + PCI_VPD_ADDR, addr); - do { - udelay(10); - pci_read_config_word(adapter->pdev, base + PCI_VPD_ADDR, &val); - } while (!(val & PCI_VPD_ADDR_F) && --attempts); - - if (!(val & PCI_VPD_ADDR_F)) { - CH_ERR(adapter, "reading EEPROM address 0x%x failed\n", addr); - return -EIO; - } - pci_read_config_dword(adapter->pdev, base + PCI_VPD_DATA, &v); - *data = cpu_to_le32(v); - return 0; -} - -/** - * t3_seeprom_write - write a VPD EEPROM location - * @adapter: adapter to write - * @addr: EEPROM address - * @data: value to write - * - * Write a 32-bit word to a location in VPD EEPROM using the card's PCI - * VPD ROM capability. - */ -int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data) -{ - u16 val; - int attempts = EEPROM_MAX_POLL; - unsigned int base = adapter->params.pci.vpd_cap_addr; - - if ((addr >= EEPROMSIZE && addr != EEPROM_STAT_ADDR) || (addr & 3)) - return -EINVAL; - - pci_write_config_dword(adapter->pdev, base + PCI_VPD_DATA, - le32_to_cpu(data)); - pci_write_config_word(adapter->pdev,base + PCI_VPD_ADDR, - addr | PCI_VPD_ADDR_F); - do { - msleep(1); - pci_read_config_word(adapter->pdev, base + PCI_VPD_ADDR, &val); - } while ((val & PCI_VPD_ADDR_F) && --attempts); - - if (val & PCI_VPD_ADDR_F) { - CH_ERR(adapter, "write to EEPROM address 0x%x failed\n", addr); - return -EIO; - } - return 0; -} - -/** * t3_seeprom_wp - enable/disable EEPROM write protection * @adapter: the adapter * @enable: 1 to enable write protection, 0 to disable it @@ -679,7 +608,14 @@ int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data) */ int t3_seeprom_wp(struct adapter *adapter, int enable) { - return t3_seeprom_write(adapter, EEPROM_STAT_ADDR, enable ? 0xc : 0); + u32 data = enable ? 0xc : 0; + int ret; + + /* EEPROM_STAT_ADDR is outside VPD area, use pci_write_vpd_any() */ + ret = pci_write_vpd_any(adapter->pdev, EEPROM_STAT_ADDR, sizeof(u32), + &data); + + return ret < 0 ? 
ret : 0; } static int vpdstrtouint(char *s, u8 len, unsigned int base, unsigned int *val) @@ -709,24 +645,22 @@ static int vpdstrtou16(char *s, u8 len, unsigned int base, u16 *val) */ static int get_vpd_params(struct adapter *adapter, struct vpd_params *p) { - int i, addr, ret; struct t3_vpd vpd; + u8 base_val = 0; + int addr, ret; /* * Card information is normally at VPD_BASE but some early cards had * it at 0. */ - ret = t3_seeprom_read(adapter, VPD_BASE, (__le32 *)&vpd); - if (ret) + ret = pci_read_vpd(adapter->pdev, VPD_BASE, 1, &base_val); + if (ret < 0) return ret; - addr = vpd.id_tag == 0x82 ? VPD_BASE : 0; + addr = base_val == PCI_VPD_LRDT_ID_STRING ? VPD_BASE : 0; - for (i = 0; i < sizeof(vpd); i += 4) { - ret = t3_seeprom_read(adapter, addr + i, - (__le32 *)((u8 *)&vpd + i)); - if (ret) - return ret; - } + ret = pci_read_vpd(adapter->pdev, addr, sizeof(vpd), &vpd); + if (ret < 0) + return ret; ret = vpdstrtouint(vpd.cclk_data, vpd.cclk_len, 10, &p->cclk); if (ret) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c index 5ebd96f6833d..9fdedd83f392 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c @@ -608,7 +608,7 @@ static void hns3_get_drvinfo(struct net_device *netdev, return; } - strncpy(drvinfo->driver, h->pdev->driver->name, + strncpy(drvinfo->driver, dev_driver_string(&h->pdev->dev), sizeof(drvinfo->driver)); drvinfo->driver[sizeof(drvinfo->driver) - 1] = '\0'; diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers/net/ethernet/marvell/prestera/prestera_pci.c index 5d4d410b07c8..d650082496d6 100644 --- a/drivers/net/ethernet/marvell/prestera/prestera_pci.c +++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c @@ -776,7 +776,7 @@ out_release: static int prestera_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { - const char *driver_name = pdev->driver->name; + const char *driver_name = dev_driver_string(&pdev->dev); struct prestera_fw *fw; int err; diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c index fcace73eae40..a15c95a10bae 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/pci.c +++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c @@ -1875,7 +1875,7 @@ static void mlxsw_pci_cmd_fini(struct mlxsw_pci *mlxsw_pci) static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { - const char *driver_name = pdev->driver->name; + const char *driver_name = dev_driver_string(&pdev->dev); struct mlxsw_pci *mlxsw_pci; int err; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c index 0685ece1f155..1de076f55740 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c @@ -202,7 +202,8 @@ nfp_get_drvinfo(struct nfp_app *app, struct pci_dev *pdev, { char nsp_version[ETHTOOL_FWVERS_LEN] = {}; - strlcpy(drvinfo->driver, pdev->driver->name, sizeof(drvinfo->driver)); + strlcpy(drvinfo->driver, dev_driver_string(&pdev->dev), + sizeof(drvinfo->driver)); nfp_net_get_nspinfo(app, nsp_version); snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version), "%s %s %s %s", vnic_version, nsp_version, diff --git a/drivers/of/irq.c b/drivers/of/irq.c index 352e14b007e7..32be5a03951f 100644 --- a/drivers/of/irq.c +++ b/drivers/of/irq.c @@ -156,10 +156,14 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq) /* 
Now start the actual "proper" walk of the interrupt tree */ while (ipar != NULL) { - /* Now check if cursor is an interrupt-controller and if it is - * then we are done + /* + * Now check if cursor is an interrupt-controller and + * if it is then we are done, unless there is an + * interrupt-map which takes precedence. */ - if (of_property_read_bool(ipar, "interrupt-controller")) { + imap = of_get_property(ipar, "interrupt-map", &imaplen); + if (imap == NULL && + of_property_read_bool(ipar, "interrupt-controller")) { pr_debug(" -> got it !\n"); return 0; } @@ -173,8 +177,6 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq) goto fail; } - /* Now look for an interrupt-map */ - imap = of_get_property(ipar, "interrupt-map", &imaplen); /* No interrupt map, check for an interrupt parent */ if (imap == NULL) { pr_debug(" -> no map, getting parent\n"); @@ -255,6 +257,11 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq) out_irq->args_count = intsize = newintsize; addrsize = newaddrsize; + if (ipar == newpar) { + pr_debug("%pOF interrupt-map entry to self\n", ipar); + return 0; + } + skiplevel: /* Iterate again with new parent */ out_irq->np = newpar; diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig index 326f7d13024f..e917bb3652bb 100644 --- a/drivers/pci/controller/Kconfig +++ b/drivers/pci/controller/Kconfig @@ -254,7 +254,7 @@ config PCIE_MEDIATEK_GEN3 MediaTek SoCs. config VMD - depends on PCI_MSI && X86_64 && SRCU + depends on PCI_MSI && X86_64 && SRCU && !UML tristate "Intel Volume Management Device Driver" help Adds support for the Intel Volume Management Device (VMD). VMD is a @@ -312,6 +312,32 @@ config PCIE_HISI_ERR Say Y here if you want error handling support for the PCIe controller's errors on HiSilicon HIP SoCs +config PCIE_APPLE_MSI_DOORBELL_ADDR + hex + default 0xfffff000 + depends on PCIE_APPLE + +config PCIE_APPLE + tristate "Apple PCIe controller" + depends on ARCH_APPLE || COMPILE_TEST + depends on OF + depends on PCI_MSI_IRQ_DOMAIN + select PCI_HOST_COMMON + help + Say Y here if you want to enable PCIe controller support on Apple + system-on-chips, like the Apple M1. This is required for the USB + type-A ports, Ethernet, Wi-Fi, and Bluetooth. + + If unsure, say Y if you have an Apple Silicon system. + +config PCIE_MT7621 + tristate "MediaTek MT7621 PCIe Controller" + depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST) + select PHY_MT7621_PCI + default SOC_MT7621 + help + This selects a driver for the MediaTek MT7621 PCIe Controller. 
+ source "drivers/pci/controller/dwc/Kconfig" source "drivers/pci/controller/mobiveil/Kconfig" source "drivers/pci/controller/cadence/Kconfig" diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile index aaf30b3dcc14..37c8663de7fe 100644 --- a/drivers/pci/controller/Makefile +++ b/drivers/pci/controller/Makefile @@ -37,6 +37,9 @@ obj-$(CONFIG_VMD) += vmd.o obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o +obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o +obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o + # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW obj-y += dwc/ obj-y += mobiveil/ diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c index ffb176d288cd..918e11082e6a 100644 --- a/drivers/pci/controller/cadence/pci-j721e.c +++ b/drivers/pci/controller/cadence/pci-j721e.c @@ -474,7 +474,7 @@ static int j721e_pcie_probe(struct platform_device *pdev) ret = clk_prepare_enable(clk); if (ret) { dev_err(dev, "failed to enable pcie_refclk\n"); - goto err_get_sync; + goto err_pcie_setup; } pcie->refclk = clk; diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c index 5fee0f89ab59..a224afadbcc0 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c @@ -127,6 +127,8 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) goto err_init; } + return 0; + err_init: err_get_sync: pm_runtime_put_sync(dev); diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig index 76c0a63a3f64..62ce3abf0f19 100644 --- a/drivers/pci/controller/dwc/Kconfig +++ b/drivers/pci/controller/dwc/Kconfig @@ -8,22 +8,20 @@ config PCIE_DW config PCIE_DW_HOST bool - depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW config PCIE_DW_EP bool - depends on PCI_ENDPOINT select PCIE_DW config PCI_DRA7XX - bool + tristate config PCI_DRA7XX_HOST - bool "TI DRA7xx PCIe controller Host Mode" + tristate "TI DRA7xx PCIe controller Host Mode" depends on SOC_DRA7XX || COMPILE_TEST - depends on PCI_MSI_IRQ_DOMAIN depends on OF && HAS_IOMEM && TI_PIPE3 + depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST select PCI_DRA7XX default y if SOC_DRA7XX @@ -36,10 +34,10 @@ config PCI_DRA7XX_HOST This uses the DesignWare core. config PCI_DRA7XX_EP - bool "TI DRA7xx PCIe controller Endpoint Mode" + tristate "TI DRA7xx PCIe controller Endpoint Mode" depends on SOC_DRA7XX || COMPILE_TEST - depends on PCI_ENDPOINT depends on OF && HAS_IOMEM && TI_PIPE3 + depends on PCI_ENDPOINT select PCIE_DW_EP select PCI_DRA7XX help @@ -55,7 +53,7 @@ config PCIE_DW_PLAT config PCIE_DW_PLAT_HOST bool "Platform bus based DesignWare PCIe Controller - Host mode" - depends on PCI && PCI_MSI_IRQ_DOMAIN + depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST select PCIE_DW_PLAT help @@ -138,8 +136,8 @@ config PCI_LAYERSCAPE bool "Freescale Layerscape PCIe controller - Host mode" depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST) depends on PCI_MSI_IRQ_DOMAIN - select MFD_SYSCON select PCIE_DW_HOST + select MFD_SYSCON help Say Y here if you want to enable PCIe controller support on Layerscape SoCs to work in Host mode. @@ -180,6 +178,16 @@ config PCIE_QCOM PCIe controller uses the DesignWare core plus Qualcomm-specific hardware wrappers. 
+config PCIE_QCOM_EP + tristate "Qualcomm PCIe controller - Endpoint mode" + depends on OF && (ARCH_QCOM || COMPILE_TEST) + depends on PCI_ENDPOINT + select PCIE_DW_EP + help + Say Y here to enable support for the PCIe controllers on Qualcomm SoCs + to work in endpoint mode. The PCIe controller uses the DesignWare core + plus Qualcomm-specific hardware wrappers. + config PCIE_ARMADA_8K bool "Marvell Armada-8K PCIe controller" depends on ARCH_MVEBU || COMPILE_TEST @@ -266,7 +274,7 @@ config PCIE_KEEMBAY_EP config PCIE_KIRIN depends on OF && (ARM64 || COMPILE_TEST) - bool "HiSilicon Kirin series SoCs PCIe controllers" + tristate "HiSilicon Kirin series SoCs PCIe controllers" depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST help @@ -283,8 +291,8 @@ config PCIE_HISI_STB config PCI_MESON tristate "MESON PCIe controller" - depends on PCI_MSI_IRQ_DOMAIN default m if ARCH_MESON + depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST help Say Y here if you want to enable PCI controller support on Amlogic diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile index 73244409792c..8ba7b67f5e50 100644 --- a/drivers/pci/controller/dwc/Makefile +++ b/drivers/pci/controller/dwc/Makefile @@ -12,6 +12,7 @@ obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o +obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c index fbbb78f6885e..a4221f6f3629 100644 --- a/drivers/pci/controller/dwc/pci-dra7xx.c +++ b/drivers/pci/controller/dwc/pci-dra7xx.c @@ -7,6 +7,7 @@ * Authors: Kishon Vijay Abraham I <kishon@ti.com> */ +#include <linux/clk.h> #include <linux/delay.h> #include <linux/device.h> #include <linux/err.h> @@ -14,7 +15,7 @@ #include <linux/irq.h> #include <linux/irqdomain.h> #include <linux/kernel.h> -#include <linux/init.h> +#include <linux/module.h> #include <linux/of_device.h> #include <linux/of_gpio.h> #include <linux/of_pci.h> @@ -90,6 +91,7 @@ struct dra7xx_pcie { int phy_count; /* DT phy-names count */ struct phy **phy; struct irq_domain *irq_domain; + struct clk *clk; enum dw_pcie_device_mode mode; }; @@ -607,6 +609,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = { }, {}, }; +MODULE_DEVICE_TABLE(of, of_dra7xx_pcie_match); /* * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870 @@ -740,6 +743,15 @@ static int dra7xx_pcie_probe(struct platform_device *pdev) if (!link) return -ENOMEM; + dra7xx->clk = devm_clk_get_optional(dev, NULL); + if (IS_ERR(dra7xx->clk)) + return dev_err_probe(dev, PTR_ERR(dra7xx->clk), + "clock request failed"); + + ret = clk_prepare_enable(dra7xx->clk); + if (ret) + return ret; + for (i = 0; i < phy_count; i++) { snprintf(name, sizeof(name), "pcie-phy%d", i); phy[i] = devm_phy_get(dev, name); @@ -925,6 +937,8 @@ static void dra7xx_pcie_shutdown(struct platform_device *pdev) pm_runtime_disable(dev); dra7xx_pcie_disable_phy(dra7xx); + + clk_disable_unprepare(dra7xx->clk); } static const struct dev_pm_ops dra7xx_pcie_pm_ops = { @@ -943,4 +957,8 @@ static struct platform_driver dra7xx_pcie_driver = { }, .shutdown = dra7xx_pcie_shutdown, }; -builtin_platform_driver(dra7xx_pcie_driver); +module_platform_driver(dra7xx_pcie_driver); + 
+MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>"); +MODULE_DESCRIPTION("PCIe controller driver for TI DRA7xx SoCs"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c index 80fc98acf097..26f49f797b0f 100644 --- a/drivers/pci/controller/dwc/pci-imx6.c +++ b/drivers/pci/controller/dwc/pci-imx6.c @@ -1132,7 +1132,7 @@ static int imx6_pcie_probe(struct platform_device *pdev) /* Limit link speed */ pci->link_gen = 1; - ret = of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen); + of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen); imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); if (IS_ERR(imx6_pcie->vpcie)) { diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c index 998b698f4085..0eda8236c125 100644 --- a/drivers/pci/controller/dwc/pcie-designware-ep.c +++ b/drivers/pci/controller/dwc/pcie-designware-ep.c @@ -83,6 +83,7 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) for (func_no = 0; func_no < funcs; func_no++) __dw_pcie_ep_reset_bar(pci, func_no, bar, 0); } +EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar); static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no, u8 cap_ptr, u8 cap) @@ -485,6 +486,7 @@ int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no) return -EINVAL; } +EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_legacy_irq); int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, u8 interrupt_num) @@ -536,6 +538,7 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, return 0; } +EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq); int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no, u16 interrupt_num) diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index d1d9b8344ec9..f4755f3a03be 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -335,6 +335,16 @@ int dw_pcie_host_init(struct pcie_port *pp) if (pci->link_gen < 1) pci->link_gen = of_pci_get_max_link_speed(np); + /* Set default bus ops */ + bridge->ops = &dw_pcie_ops; + bridge->child_ops = &dw_child_pcie_ops; + + if (pp->ops->host_init) { + ret = pp->ops->host_init(pp); + if (ret) + return ret; + } + if (pci_msi_enabled()) { pp->has_msi_ctrl = !(pp->ops->msi_host_init || of_property_read_bool(np, "msi-parent") || @@ -388,15 +398,6 @@ int dw_pcie_host_init(struct pcie_port *pp) } } - /* Set default bus ops */ - bridge->ops = &dw_pcie_ops; - bridge->child_ops = &dw_child_pcie_ops; - - if (pp->ops->host_init) { - ret = pp->ops->host_init(pp); - if (ret) - goto err_free_msi; - } dw_pcie_iatu_detect(pci); dw_pcie_setup_rc(pp); diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c index a945f0c0e73d..850b4533f4ef 100644 --- a/drivers/pci/controller/dwc/pcie-designware.c +++ b/drivers/pci/controller/dwc/pcie-designware.c @@ -538,6 +538,7 @@ int dw_pcie_link_up(struct dw_pcie *pci) return ((val & PCIE_PORT_DEBUG1_LINK_UP) && (!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING))); } +EXPORT_SYMBOL_GPL(dw_pcie_link_up); void dw_pcie_upconfig_setup(struct dw_pcie *pci) { diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c index 026fd1e42a55..095afbccf9c1 100644 --- a/drivers/pci/controller/dwc/pcie-kirin.c +++ b/drivers/pci/controller/dwc/pcie-kirin.c @@ -8,16 +8,18 @@ * Author: Xiaowei 
Song <songxiaowei@huawei.com> */ -#include <linux/compiler.h> #include <linux/clk.h> +#include <linux/compiler.h> #include <linux/delay.h> #include <linux/err.h> #include <linux/gpio.h> #include <linux/interrupt.h> #include <linux/mfd/syscon.h> #include <linux/of_address.h> +#include <linux/of_device.h> #include <linux/of_gpio.h> #include <linux/of_pci.h> +#include <linux/phy/phy.h> #include <linux/pci.h> #include <linux/pci_regs.h> #include <linux/platform_device.h> @@ -28,26 +30,16 @@ #define to_kirin_pcie(x) dev_get_drvdata((x)->dev) -#define REF_CLK_FREQ 100000000 - /* PCIe ELBI registers */ #define SOC_PCIECTRL_CTRL0_ADDR 0x000 #define SOC_PCIECTRL_CTRL1_ADDR 0x004 -#define SOC_PCIEPHY_CTRL2_ADDR 0x008 -#define SOC_PCIEPHY_CTRL3_ADDR 0x00c #define PCIE_ELBI_SLV_DBI_ENABLE (0x1 << 21) /* info located in APB */ #define PCIE_APP_LTSSM_ENABLE 0x01c -#define PCIE_APB_PHY_CTRL0 0x0 -#define PCIE_APB_PHY_CTRL1 0x4 #define PCIE_APB_PHY_STATUS0 0x400 #define PCIE_LINKUP_ENABLE (0x8020) #define PCIE_LTSSM_ENABLE_BIT (0x1 << 11) -#define PIPE_CLK_STABLE (0x1 << 19) -#define PHY_REF_PAD_BIT (0x1 << 8) -#define PHY_PWR_DOWN_BIT (0x1 << 22) -#define PHY_RST_ACK_BIT (0x1 << 16) /* info located in sysctrl */ #define SCTRL_PCIE_CMOS_OFFSET 0x60 @@ -60,17 +52,70 @@ #define PCIE_DEBOUNCE_PARAM 0xF0F400 #define PCIE_OE_BYPASS (0x3 << 28) +/* + * Max number of connected PCI slots at an external PCI bridge + * + * This is used on HiKey 970, which has a PEX 8606 bridge with 4 connected + * lanes (lane 0 upstream, and the other three lanes, one connected to an + * in-board Ethernet adapter and the other two connected to M.2 and mini + * PCI slots. + * + * Each slot has a different clock source and uses a separate PERST# pin. + */ +#define MAX_PCI_SLOTS 3 + +enum pcie_kirin_phy_type { + PCIE_KIRIN_INTERNAL_PHY, + PCIE_KIRIN_EXTERNAL_PHY +}; + +struct kirin_pcie { + enum pcie_kirin_phy_type type; + + struct dw_pcie *pci; + struct regmap *apb; + struct phy *phy; + void *phy_priv; /* only for PCIE_KIRIN_INTERNAL_PHY */ + + /* DWC PERST# */ + int gpio_id_dwc_perst; + + /* Per-slot PERST# */ + int num_slots; + int gpio_id_reset[MAX_PCI_SLOTS]; + const char *reset_names[MAX_PCI_SLOTS]; + + /* Per-slot clkreq */ + int n_gpio_clkreq; + int gpio_id_clkreq[MAX_PCI_SLOTS]; + const char *clkreq_names[MAX_PCI_SLOTS]; +}; + +/* + * Kirin 960 PHY. Can't be split into a PHY driver without changing the + * DT schema. 
+ */ + +#define REF_CLK_FREQ 100000000 + +/* PHY info located in APB */ +#define PCIE_APB_PHY_CTRL0 0x0 +#define PCIE_APB_PHY_CTRL1 0x4 +#define PCIE_APB_PHY_STATUS0 0x400 +#define PIPE_CLK_STABLE BIT(19) +#define PHY_REF_PAD_BIT BIT(8) +#define PHY_PWR_DOWN_BIT BIT(22) +#define PHY_RST_ACK_BIT BIT(16) + /* peri_crg ctrl */ #define CRGCTRL_PCIE_ASSERT_OFFSET 0x88 #define CRGCTRL_PCIE_ASSERT_BIT 0x8c000000 /* Time for delay */ -#define REF_2_PERST_MIN 20000 +#define REF_2_PERST_MIN 21000 #define REF_2_PERST_MAX 25000 #define PERST_2_ACCESS_MIN 10000 #define PERST_2_ACCESS_MAX 12000 -#define LINK_WAIT_MIN 900 -#define LINK_WAIT_MAX 1000 #define PIPE_CLK_WAIT_MIN 550 #define PIPE_CLK_WAIT_MAX 600 #define TIME_CMOS_MIN 100 @@ -78,118 +123,101 @@ #define TIME_PHY_PD_MIN 10 #define TIME_PHY_PD_MAX 11 -struct kirin_pcie { - struct dw_pcie *pci; - void __iomem *apb_base; - void __iomem *phy_base; +struct hi3660_pcie_phy { + struct device *dev; + void __iomem *base; struct regmap *crgctrl; struct regmap *sysctrl; struct clk *apb_sys_clk; struct clk *apb_phy_clk; struct clk *phy_ref_clk; - struct clk *pcie_aclk; - struct clk *pcie_aux_clk; - int gpio_id_reset; + struct clk *aclk; + struct clk *aux_clk; }; -/* Registers in PCIeCTRL */ -static inline void kirin_apb_ctrl_writel(struct kirin_pcie *kirin_pcie, - u32 val, u32 reg) -{ - writel(val, kirin_pcie->apb_base + reg); -} - -static inline u32 kirin_apb_ctrl_readl(struct kirin_pcie *kirin_pcie, u32 reg) -{ - return readl(kirin_pcie->apb_base + reg); -} - /* Registers in PCIePHY */ -static inline void kirin_apb_phy_writel(struct kirin_pcie *kirin_pcie, +static inline void kirin_apb_phy_writel(struct hi3660_pcie_phy *hi3660_pcie_phy, u32 val, u32 reg) { - writel(val, kirin_pcie->phy_base + reg); + writel(val, hi3660_pcie_phy->base + reg); } -static inline u32 kirin_apb_phy_readl(struct kirin_pcie *kirin_pcie, u32 reg) +static inline u32 kirin_apb_phy_readl(struct hi3660_pcie_phy *hi3660_pcie_phy, + u32 reg) { - return readl(kirin_pcie->phy_base + reg); + return readl(hi3660_pcie_phy->base + reg); } -static long kirin_pcie_get_clk(struct kirin_pcie *kirin_pcie, - struct platform_device *pdev) +static int hi3660_pcie_phy_get_clk(struct hi3660_pcie_phy *phy) { - struct device *dev = &pdev->dev; + struct device *dev = phy->dev; - kirin_pcie->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref"); - if (IS_ERR(kirin_pcie->phy_ref_clk)) - return PTR_ERR(kirin_pcie->phy_ref_clk); + phy->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref"); + if (IS_ERR(phy->phy_ref_clk)) + return PTR_ERR(phy->phy_ref_clk); - kirin_pcie->pcie_aux_clk = devm_clk_get(dev, "pcie_aux"); - if (IS_ERR(kirin_pcie->pcie_aux_clk)) - return PTR_ERR(kirin_pcie->pcie_aux_clk); + phy->aux_clk = devm_clk_get(dev, "pcie_aux"); + if (IS_ERR(phy->aux_clk)) + return PTR_ERR(phy->aux_clk); - kirin_pcie->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy"); - if (IS_ERR(kirin_pcie->apb_phy_clk)) - return PTR_ERR(kirin_pcie->apb_phy_clk); + phy->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy"); + if (IS_ERR(phy->apb_phy_clk)) + return PTR_ERR(phy->apb_phy_clk); - kirin_pcie->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys"); - if (IS_ERR(kirin_pcie->apb_sys_clk)) - return PTR_ERR(kirin_pcie->apb_sys_clk); + phy->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys"); + if (IS_ERR(phy->apb_sys_clk)) + return PTR_ERR(phy->apb_sys_clk); - kirin_pcie->pcie_aclk = devm_clk_get(dev, "pcie_aclk"); - if (IS_ERR(kirin_pcie->pcie_aclk)) - return PTR_ERR(kirin_pcie->pcie_aclk); + phy->aclk = devm_clk_get(dev, "pcie_aclk"); + if 
(IS_ERR(phy->aclk)) + return PTR_ERR(phy->aclk); return 0; } -static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie, - struct platform_device *pdev) +static int hi3660_pcie_phy_get_resource(struct hi3660_pcie_phy *phy) { - kirin_pcie->apb_base = - devm_platform_ioremap_resource_byname(pdev, "apb"); - if (IS_ERR(kirin_pcie->apb_base)) - return PTR_ERR(kirin_pcie->apb_base); - - kirin_pcie->phy_base = - devm_platform_ioremap_resource_byname(pdev, "phy"); - if (IS_ERR(kirin_pcie->phy_base)) - return PTR_ERR(kirin_pcie->phy_base); - - kirin_pcie->crgctrl = - syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl"); - if (IS_ERR(kirin_pcie->crgctrl)) - return PTR_ERR(kirin_pcie->crgctrl); - - kirin_pcie->sysctrl = - syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl"); - if (IS_ERR(kirin_pcie->sysctrl)) - return PTR_ERR(kirin_pcie->sysctrl); + struct device *dev = phy->dev; + struct platform_device *pdev; + + /* registers */ + pdev = container_of(dev, struct platform_device, dev); + + phy->base = devm_platform_ioremap_resource_byname(pdev, "phy"); + if (IS_ERR(phy->base)) + return PTR_ERR(phy->base); + + phy->crgctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl"); + if (IS_ERR(phy->crgctrl)) + return PTR_ERR(phy->crgctrl); + + phy->sysctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl"); + if (IS_ERR(phy->sysctrl)) + return PTR_ERR(phy->sysctrl); return 0; } -static int kirin_pcie_phy_init(struct kirin_pcie *kirin_pcie) +static int hi3660_pcie_phy_start(struct hi3660_pcie_phy *phy) { - struct device *dev = kirin_pcie->pci->dev; + struct device *dev = phy->dev; u32 reg_val; - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1); + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL1); reg_val &= ~PHY_REF_PAD_BIT; - kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1); + kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL1); - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL0); + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL0); reg_val &= ~PHY_PWR_DOWN_BIT; - kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL0); + kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL0); usleep_range(TIME_PHY_PD_MIN, TIME_PHY_PD_MAX); - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1); + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL1); reg_val &= ~PHY_RST_ACK_BIT; - kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1); + kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL1); usleep_range(PIPE_CLK_WAIT_MIN, PIPE_CLK_WAIT_MAX); - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_STATUS0); + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_STATUS0); if (reg_val & PIPE_CLK_STABLE) { dev_err(dev, "PIPE clk is not stable\n"); return -EINVAL; @@ -198,102 +226,274 @@ static int kirin_pcie_phy_init(struct kirin_pcie *kirin_pcie) return 0; } -static void kirin_pcie_oe_enable(struct kirin_pcie *kirin_pcie) +static void hi3660_pcie_phy_oe_enable(struct hi3660_pcie_phy *phy) { u32 val; - regmap_read(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, &val); + regmap_read(phy->sysctrl, SCTRL_PCIE_OE_OFFSET, &val); val |= PCIE_DEBOUNCE_PARAM; val &= ~PCIE_OE_BYPASS; - regmap_write(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, val); + regmap_write(phy->sysctrl, SCTRL_PCIE_OE_OFFSET, val); } -static int kirin_pcie_clk_ctrl(struct kirin_pcie *kirin_pcie, bool enable) +static int hi3660_pcie_phy_clk_ctrl(struct hi3660_pcie_phy *phy, bool enable) { int ret = 0; if (!enable) goto close_clk; - ret = 
clk_set_rate(kirin_pcie->phy_ref_clk, REF_CLK_FREQ); + ret = clk_set_rate(phy->phy_ref_clk, REF_CLK_FREQ); if (ret) return ret; - ret = clk_prepare_enable(kirin_pcie->phy_ref_clk); + ret = clk_prepare_enable(phy->phy_ref_clk); if (ret) return ret; - ret = clk_prepare_enable(kirin_pcie->apb_sys_clk); + ret = clk_prepare_enable(phy->apb_sys_clk); if (ret) goto apb_sys_fail; - ret = clk_prepare_enable(kirin_pcie->apb_phy_clk); + ret = clk_prepare_enable(phy->apb_phy_clk); if (ret) goto apb_phy_fail; - ret = clk_prepare_enable(kirin_pcie->pcie_aclk); + ret = clk_prepare_enable(phy->aclk); if (ret) goto aclk_fail; - ret = clk_prepare_enable(kirin_pcie->pcie_aux_clk); + ret = clk_prepare_enable(phy->aux_clk); if (ret) goto aux_clk_fail; return 0; close_clk: - clk_disable_unprepare(kirin_pcie->pcie_aux_clk); + clk_disable_unprepare(phy->aux_clk); aux_clk_fail: - clk_disable_unprepare(kirin_pcie->pcie_aclk); + clk_disable_unprepare(phy->aclk); aclk_fail: - clk_disable_unprepare(kirin_pcie->apb_phy_clk); + clk_disable_unprepare(phy->apb_phy_clk); apb_phy_fail: - clk_disable_unprepare(kirin_pcie->apb_sys_clk); + clk_disable_unprepare(phy->apb_sys_clk); apb_sys_fail: - clk_disable_unprepare(kirin_pcie->phy_ref_clk); + clk_disable_unprepare(phy->phy_ref_clk); return ret; } -static int kirin_pcie_power_on(struct kirin_pcie *kirin_pcie) +static int hi3660_pcie_phy_power_on(struct kirin_pcie *pcie) { + struct hi3660_pcie_phy *phy = pcie->phy_priv; int ret; /* Power supply for Host */ - regmap_write(kirin_pcie->sysctrl, + regmap_write(phy->sysctrl, SCTRL_PCIE_CMOS_OFFSET, SCTRL_PCIE_CMOS_BIT); usleep_range(TIME_CMOS_MIN, TIME_CMOS_MAX); - kirin_pcie_oe_enable(kirin_pcie); - ret = kirin_pcie_clk_ctrl(kirin_pcie, true); + hi3660_pcie_phy_oe_enable(phy); + + ret = hi3660_pcie_phy_clk_ctrl(phy, true); if (ret) return ret; /* ISO disable, PCIeCtrl, PHY assert and clk gate clear */ - regmap_write(kirin_pcie->sysctrl, + regmap_write(phy->sysctrl, SCTRL_PCIE_ISO_OFFSET, SCTRL_PCIE_ISO_BIT); - regmap_write(kirin_pcie->crgctrl, + regmap_write(phy->crgctrl, CRGCTRL_PCIE_ASSERT_OFFSET, CRGCTRL_PCIE_ASSERT_BIT); - regmap_write(kirin_pcie->sysctrl, + regmap_write(phy->sysctrl, SCTRL_PCIE_HPCLK_OFFSET, SCTRL_PCIE_HPCLK_BIT); - ret = kirin_pcie_phy_init(kirin_pcie); + ret = hi3660_pcie_phy_start(phy); if (ret) - goto close_clk; + goto disable_clks; - /* perst assert Endpoint */ - if (!gpio_request(kirin_pcie->gpio_id_reset, "pcie_perst")) { - usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX); - ret = gpio_direction_output(kirin_pcie->gpio_id_reset, 1); - if (ret) - goto close_clk; - usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); + return 0; + +disable_clks: + hi3660_pcie_phy_clk_ctrl(phy, false); + return ret; +} + +static int hi3660_pcie_phy_init(struct platform_device *pdev, + struct kirin_pcie *pcie) +{ + struct device *dev = &pdev->dev; + struct hi3660_pcie_phy *phy; + int ret; + phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL); + if (!phy) + return -ENOMEM; + + pcie->phy_priv = phy; + phy->dev = dev; + + /* registers */ + pdev = container_of(dev, struct platform_device, dev); + + ret = hi3660_pcie_phy_get_clk(phy); + if (ret) + return ret; + + return hi3660_pcie_phy_get_resource(phy); +} + +static int hi3660_pcie_phy_power_off(struct kirin_pcie *pcie) +{ + struct hi3660_pcie_phy *phy = pcie->phy_priv; + + /* Drop power supply for Host */ + regmap_write(phy->sysctrl, SCTRL_PCIE_CMOS_OFFSET, 0x00); + + hi3660_pcie_phy_clk_ctrl(phy, false); + + return 0; +} + +/* + * The non-PHY part starts here + */ + +static 
const struct regmap_config pcie_kirin_regmap_conf = { + .name = "kirin_pcie_apb", + .reg_bits = 32, + .val_bits = 32, + .reg_stride = 4, +}; + +static int kirin_pcie_get_gpio_enable(struct kirin_pcie *pcie, + struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device_node *np = dev->of_node; + char name[32]; + int ret, i; + + /* This is an optional property */ + ret = of_gpio_named_count(np, "hisilicon,clken-gpios"); + if (ret < 0) return 0; + + if (ret > MAX_PCI_SLOTS) { + dev_err(dev, "Too many GPIO clock requests!\n"); + return -EINVAL; } -close_clk: - kirin_pcie_clk_ctrl(kirin_pcie, false); + pcie->n_gpio_clkreq = ret; + + for (i = 0; i < pcie->n_gpio_clkreq; i++) { + pcie->gpio_id_clkreq[i] = of_get_named_gpio(dev->of_node, + "hisilicon,clken-gpios", i); + if (pcie->gpio_id_clkreq[i] < 0) + return pcie->gpio_id_clkreq[i]; + + sprintf(name, "pcie_clkreq_%d", i); + pcie->clkreq_names[i] = devm_kstrdup_const(dev, name, + GFP_KERNEL); + if (!pcie->clkreq_names[i]) + return -ENOMEM; + } + + return 0; +} + +static int kirin_pcie_parse_port(struct kirin_pcie *pcie, + struct platform_device *pdev, + struct device_node *node) +{ + struct device *dev = &pdev->dev; + struct device_node *parent, *child; + int ret, slot, i; + char name[32]; + + for_each_available_child_of_node(node, parent) { + for_each_available_child_of_node(parent, child) { + i = pcie->num_slots; + + pcie->gpio_id_reset[i] = of_get_named_gpio(child, + "reset-gpios", 0); + if (pcie->gpio_id_reset[i] < 0) + continue; + + pcie->num_slots++; + if (pcie->num_slots > MAX_PCI_SLOTS) { + dev_err(dev, "Too many PCI slots!\n"); + ret = -EINVAL; + goto put_node; + } + + ret = of_pci_get_devfn(child); + if (ret < 0) { + dev_err(dev, "failed to parse devfn: %d\n", ret); + goto put_node; + } + + slot = PCI_SLOT(ret); + + sprintf(name, "pcie_perst_%d", slot); + pcie->reset_names[i] = devm_kstrdup_const(dev, name, + GFP_KERNEL); + if (!pcie->reset_names[i]) { + ret = -ENOMEM; + goto put_node; + } + } + } + + return 0; + +put_node: + of_node_put(child); + of_node_put(parent); + return ret; +} + +static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie, + struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device_node *child, *node = dev->of_node; + void __iomem *apb_base; + int ret; + + apb_base = devm_platform_ioremap_resource_byname(pdev, "apb"); + if (IS_ERR(apb_base)) + return PTR_ERR(apb_base); + + kirin_pcie->apb = devm_regmap_init_mmio(dev, apb_base, + &pcie_kirin_regmap_conf); + if (IS_ERR(kirin_pcie->apb)) + return PTR_ERR(kirin_pcie->apb); + + /* pcie internal PERST# gpio */ + kirin_pcie->gpio_id_dwc_perst = of_get_named_gpio(dev->of_node, + "reset-gpios", 0); + if (kirin_pcie->gpio_id_dwc_perst == -EPROBE_DEFER) { + return -EPROBE_DEFER; + } else if (!gpio_is_valid(kirin_pcie->gpio_id_dwc_perst)) { + dev_err(dev, "unable to get a valid gpio pin\n"); + return -ENODEV; + } + + ret = kirin_pcie_get_gpio_enable(kirin_pcie, pdev); + if (ret) + return ret; + + /* Parse OF children */ + for_each_available_child_of_node(node, child) { + ret = kirin_pcie_parse_port(kirin_pcie, pdev, child); + if (ret) + goto put_node; + } + + return 0; + +put_node: + of_node_put(child); return ret; } @@ -302,13 +502,13 @@ static void kirin_pcie_sideband_dbi_w_mode(struct kirin_pcie *kirin_pcie, { u32 val; - val = kirin_apb_ctrl_readl(kirin_pcie, SOC_PCIECTRL_CTRL0_ADDR); + regmap_read(kirin_pcie->apb, SOC_PCIECTRL_CTRL0_ADDR, &val); if (on) val = val | PCIE_ELBI_SLV_DBI_ENABLE; else val = val & 
~PCIE_ELBI_SLV_DBI_ENABLE; - kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL0_ADDR); + regmap_write(kirin_pcie->apb, SOC_PCIECTRL_CTRL0_ADDR, val); } static void kirin_pcie_sideband_dbi_r_mode(struct kirin_pcie *kirin_pcie, @@ -316,13 +516,13 @@ static void kirin_pcie_sideband_dbi_r_mode(struct kirin_pcie *kirin_pcie, { u32 val; - val = kirin_apb_ctrl_readl(kirin_pcie, SOC_PCIECTRL_CTRL1_ADDR); + regmap_read(kirin_pcie->apb, SOC_PCIECTRL_CTRL1_ADDR, &val); if (on) val = val | PCIE_ELBI_SLV_DBI_ENABLE; else val = val & ~PCIE_ELBI_SLV_DBI_ENABLE; - kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL1_ADDR); + regmap_write(kirin_pcie->apb, SOC_PCIECTRL_CTRL1_ADDR, val); } static int kirin_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn, @@ -351,9 +551,32 @@ static int kirin_pcie_wr_own_conf(struct pci_bus *bus, unsigned int devfn, return PCIBIOS_SUCCESSFUL; } +static int kirin_pcie_add_bus(struct pci_bus *bus) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata); + struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); + int i, ret; + + if (!kirin_pcie->num_slots) + return 0; + + /* Send PERST# to each slot */ + for (i = 0; i < kirin_pcie->num_slots; i++) { + ret = gpio_direction_output(kirin_pcie->gpio_id_reset[i], 1); + if (ret) { + dev_err(pci->dev, "PERST# %s error: %d\n", + kirin_pcie->reset_names[i], ret); + } + } + usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); + + return 0; +} + static struct pci_ops kirin_pci_ops = { .read = kirin_pcie_rd_own_conf, .write = kirin_pcie_wr_own_conf, + .add_bus = kirin_pcie_add_bus, }; static u32 kirin_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, @@ -382,8 +605,9 @@ static void kirin_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, static int kirin_pcie_link_up(struct dw_pcie *pci) { struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); - u32 val = kirin_apb_ctrl_readl(kirin_pcie, PCIE_APB_PHY_STATUS0); + u32 val; + regmap_read(kirin_pcie->apb, PCIE_APB_PHY_STATUS0, &val); if ((val & PCIE_LINKUP_ENABLE) == PCIE_LINKUP_ENABLE) return 1; @@ -395,8 +619,8 @@ static int kirin_pcie_start_link(struct dw_pcie *pci) struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); /* assert LTSSM enable */ - kirin_apb_ctrl_writel(kirin_pcie, PCIE_LTSSM_ENABLE_BIT, - PCIE_APP_LTSSM_ENABLE); + regmap_write(kirin_pcie->apb, PCIE_APP_LTSSM_ENABLE, + PCIE_LTSSM_ENABLE_BIT); return 0; } @@ -408,6 +632,44 @@ static int kirin_pcie_host_init(struct pcie_port *pp) return 0; } +static int kirin_pcie_gpio_request(struct kirin_pcie *kirin_pcie, + struct device *dev) +{ + int ret, i; + + for (i = 0; i < kirin_pcie->num_slots; i++) { + if (!gpio_is_valid(kirin_pcie->gpio_id_reset[i])) { + dev_err(dev, "unable to get a valid %s gpio\n", + kirin_pcie->reset_names[i]); + return -ENODEV; + } + + ret = devm_gpio_request(dev, kirin_pcie->gpio_id_reset[i], + kirin_pcie->reset_names[i]); + if (ret) + return ret; + } + + for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) { + if (!gpio_is_valid(kirin_pcie->gpio_id_clkreq[i])) { + dev_err(dev, "unable to get a valid %s gpio\n", + kirin_pcie->clkreq_names[i]); + return -ENODEV; + } + + ret = devm_gpio_request(dev, kirin_pcie->gpio_id_clkreq[i], + kirin_pcie->clkreq_names[i]); + if (ret) + return ret; + + ret = gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 0); + if (ret) + return ret; + } + + return 0; +} + static const struct dw_pcie_ops kirin_dw_pcie_ops = { .read_dbi = kirin_pcie_read_dbi, .write_dbi = kirin_pcie_write_dbi, @@ -419,8 +681,99 @@ static const struct dw_pcie_host_ops 
kirin_pcie_host_ops = { .host_init = kirin_pcie_host_init, }; +static int kirin_pcie_power_off(struct kirin_pcie *kirin_pcie) +{ + int i; + + if (kirin_pcie->type == PCIE_KIRIN_INTERNAL_PHY) + return hi3660_pcie_phy_power_off(kirin_pcie); + + for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) + gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 1); + + phy_power_off(kirin_pcie->phy); + phy_exit(kirin_pcie->phy); + + return 0; +} + +static int kirin_pcie_power_on(struct platform_device *pdev, + struct kirin_pcie *kirin_pcie) +{ + struct device *dev = &pdev->dev; + int ret; + + if (kirin_pcie->type == PCIE_KIRIN_INTERNAL_PHY) { + ret = hi3660_pcie_phy_init(pdev, kirin_pcie); + if (ret) + return ret; + + ret = hi3660_pcie_phy_power_on(kirin_pcie); + if (ret) + return ret; + } else { + kirin_pcie->phy = devm_of_phy_get(dev, dev->of_node, NULL); + if (IS_ERR(kirin_pcie->phy)) + return PTR_ERR(kirin_pcie->phy); + + ret = kirin_pcie_gpio_request(kirin_pcie, dev); + if (ret) + return ret; + + ret = phy_init(kirin_pcie->phy); + if (ret) + goto err; + + ret = phy_power_on(kirin_pcie->phy); + if (ret) + goto err; + } + + /* perst assert Endpoint */ + usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX); + + if (!gpio_request(kirin_pcie->gpio_id_dwc_perst, "pcie_perst_bridge")) { + ret = gpio_direction_output(kirin_pcie->gpio_id_dwc_perst, 1); + if (ret) + goto err; + } + + usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); + + return 0; +err: + kirin_pcie_power_off(kirin_pcie); + + return ret; +} + +static int __exit kirin_pcie_remove(struct platform_device *pdev) +{ + struct kirin_pcie *kirin_pcie = platform_get_drvdata(pdev); + + dw_pcie_host_deinit(&kirin_pcie->pci->pp); + + kirin_pcie_power_off(kirin_pcie); + + return 0; +} + +static const struct of_device_id kirin_pcie_match[] = { + { + .compatible = "hisilicon,kirin960-pcie", + .data = (void *)PCIE_KIRIN_INTERNAL_PHY + }, + { + .compatible = "hisilicon,kirin970-pcie", + .data = (void *)PCIE_KIRIN_EXTERNAL_PHY + }, + {}, +}; + static int kirin_pcie_probe(struct platform_device *pdev) { + enum pcie_kirin_phy_type phy_type; + const struct of_device_id *of_id; struct device *dev = &pdev->dev; struct kirin_pcie *kirin_pcie; struct dw_pcie *pci; @@ -431,6 +784,14 @@ static int kirin_pcie_probe(struct platform_device *pdev) return -EINVAL; } + of_id = of_match_device(kirin_pcie_match, dev); + if (!of_id) { + dev_err(dev, "OF data missing\n"); + return -EINVAL; + } + + phy_type = (long)of_id->data; + kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL); if (!kirin_pcie) return -ENOMEM; @@ -443,44 +804,33 @@ static int kirin_pcie_probe(struct platform_device *pdev) pci->ops = &kirin_dw_pcie_ops; pci->pp.ops = &kirin_pcie_host_ops; kirin_pcie->pci = pci; - - ret = kirin_pcie_get_clk(kirin_pcie, pdev); - if (ret) - return ret; + kirin_pcie->type = phy_type; ret = kirin_pcie_get_resource(kirin_pcie, pdev); if (ret) return ret; - kirin_pcie->gpio_id_reset = of_get_named_gpio(dev->of_node, - "reset-gpios", 0); - if (kirin_pcie->gpio_id_reset == -EPROBE_DEFER) { - return -EPROBE_DEFER; - } else if (!gpio_is_valid(kirin_pcie->gpio_id_reset)) { - dev_err(dev, "unable to get a valid gpio pin\n"); - return -ENODEV; - } + platform_set_drvdata(pdev, kirin_pcie); - ret = kirin_pcie_power_on(kirin_pcie); + ret = kirin_pcie_power_on(pdev, kirin_pcie); if (ret) return ret; - platform_set_drvdata(pdev, kirin_pcie); - return dw_pcie_host_init(&pci->pp); } -static const struct of_device_id kirin_pcie_match[] = { - { .compatible = "hisilicon,kirin960-pcie" }, 
- {}, -}; - static struct platform_driver kirin_pcie_driver = { .probe = kirin_pcie_probe, + .remove = __exit_p(kirin_pcie_remove), .driver = { .name = "kirin-pcie", - .of_match_table = kirin_pcie_match, - .suppress_bind_attrs = true, + .of_match_table = kirin_pcie_match, + .suppress_bind_attrs = true, }, }; -builtin_platform_driver(kirin_pcie_driver); +module_platform_driver(kirin_pcie_driver); + +MODULE_DEVICE_TABLE(of, kirin_pcie_match); +MODULE_DESCRIPTION("PCIe host controller driver for Kirin Phone SoCs"); +MODULE_AUTHOR("Xiaowei Song <songxiaowei@huawei.com>"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c new file mode 100644 index 000000000000..7b17da2f9b3f --- /dev/null +++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c @@ -0,0 +1,721 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Qualcomm PCIe Endpoint controller driver + * + * Copyright (c) 2020, The Linux Foundation. All rights reserved. + * Author: Siddartha Mohanadoss <smohanad@codeaurora.org + * + * Copyright (c) 2021, Linaro Ltd. + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org + */ + +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/gpio/consumer.h> +#include <linux/mfd/syscon.h> +#include <linux/phy/phy.h> +#include <linux/platform_device.h> +#include <linux/pm_domain.h> +#include <linux/regmap.h> +#include <linux/reset.h> + +#include "pcie-designware.h" + +/* PARF registers */ +#define PARF_SYS_CTRL 0x00 +#define PARF_DB_CTRL 0x10 +#define PARF_PM_CTRL 0x20 +#define PARF_MHI_BASE_ADDR_LOWER 0x178 +#define PARF_MHI_BASE_ADDR_UPPER 0x17c +#define PARF_DEBUG_INT_EN 0x190 +#define PARF_AXI_MSTR_RD_HALT_NO_WRITES 0x1a4 +#define PARF_AXI_MSTR_WR_ADDR_HALT 0x1a8 +#define PARF_Q2A_FLUSH 0x1ac +#define PARF_LTSSM 0x1b0 +#define PARF_CFG_BITS 0x210 +#define PARF_INT_ALL_STATUS 0x224 +#define PARF_INT_ALL_CLEAR 0x228 +#define PARF_INT_ALL_MASK 0x22c +#define PARF_SLV_ADDR_MSB_CTRL 0x2c0 +#define PARF_DBI_BASE_ADDR 0x350 +#define PARF_DBI_BASE_ADDR_HI 0x354 +#define PARF_SLV_ADDR_SPACE_SIZE 0x358 +#define PARF_SLV_ADDR_SPACE_SIZE_HI 0x35c +#define PARF_ATU_BASE_ADDR 0x634 +#define PARF_ATU_BASE_ADDR_HI 0x638 +#define PARF_SRIS_MODE 0x644 +#define PARF_DEVICE_TYPE 0x1000 +#define PARF_BDF_TO_SID_CFG 0x2c00 + +/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ +#define PARF_INT_ALL_LINK_DOWN BIT(1) +#define PARF_INT_ALL_BME BIT(2) +#define PARF_INT_ALL_PM_TURNOFF BIT(3) +#define PARF_INT_ALL_DEBUG BIT(4) +#define PARF_INT_ALL_LTR BIT(5) +#define PARF_INT_ALL_MHI_Q6 BIT(6) +#define PARF_INT_ALL_MHI_A7 BIT(7) +#define PARF_INT_ALL_DSTATE_CHANGE BIT(8) +#define PARF_INT_ALL_L1SUB_TIMEOUT BIT(9) +#define PARF_INT_ALL_MMIO_WRITE BIT(10) +#define PARF_INT_ALL_CFG_WRITE BIT(11) +#define PARF_INT_ALL_BRIDGE_FLUSH_N BIT(12) +#define PARF_INT_ALL_LINK_UP BIT(13) +#define PARF_INT_ALL_AER_LEGACY BIT(14) +#define PARF_INT_ALL_PLS_ERR BIT(15) +#define PARF_INT_ALL_PME_LEGACY BIT(16) +#define PARF_INT_ALL_PLS_PME BIT(17) + +/* PARF_BDF_TO_SID_CFG register fields */ +#define PARF_BDF_TO_SID_BYPASS BIT(0) + +/* PARF_DEBUG_INT_EN register fields */ +#define PARF_DEBUG_INT_PM_DSTATE_CHANGE BIT(1) +#define PARF_DEBUG_INT_CFG_BUS_MASTER_EN BIT(2) +#define PARF_DEBUG_INT_RADM_PM_TURNOFF BIT(3) + +/* PARF_DEVICE_TYPE register fields */ +#define PARF_DEVICE_TYPE_EP 0x0 + +/* PARF_PM_CTRL register fields */ +#define PARF_PM_CTRL_REQ_EXIT_L1 BIT(1) +#define PARF_PM_CTRL_READY_ENTR_L23 BIT(2) +#define PARF_PM_CTRL_REQ_NOT_ENTR_L1 BIT(5) + +/* 
PARF_AXI_MSTR_RD_HALT_NO_WRITES register fields */ +#define PARF_AXI_MSTR_RD_HALT_NO_WRITE_EN BIT(0) + +/* PARF_AXI_MSTR_WR_ADDR_HALT register fields */ +#define PARF_AXI_MSTR_WR_ADDR_HALT_EN BIT(31) + +/* PARF_Q2A_FLUSH register fields */ +#define PARF_Q2A_FLUSH_EN BIT(16) + +/* PARF_SYS_CTRL register fields */ +#define PARF_SYS_CTRL_AUX_PWR_DET BIT(4) +#define PARF_SYS_CTRL_CORE_CLK_CGC_DIS BIT(6) +#define PARF_SYS_CTRL_SLV_DBI_WAKE_DISABLE BIT(11) + +/* PARF_DB_CTRL register fields */ +#define PARF_DB_CTRL_INSR_DBNCR_BLOCK BIT(0) +#define PARF_DB_CTRL_RMVL_DBNCR_BLOCK BIT(1) +#define PARF_DB_CTRL_DBI_WKP_BLOCK BIT(4) +#define PARF_DB_CTRL_SLV_WKP_BLOCK BIT(5) +#define PARF_DB_CTRL_MST_WKP_BLOCK BIT(6) + +/* PARF_CFG_BITS register fields */ +#define PARF_CFG_BITS_REQ_EXIT_L1SS_MSI_LTR_EN BIT(1) + +/* ELBI registers */ +#define ELBI_SYS_STTS 0x08 + +/* DBI registers */ +#define DBI_CON_STATUS 0x44 + +/* DBI register fields */ +#define DBI_CON_STATUS_POWER_STATE_MASK GENMASK(1, 0) + +#define XMLH_LINK_UP 0x400 +#define CORE_RESET_TIME_US_MIN 1000 +#define CORE_RESET_TIME_US_MAX 1005 +#define WAKE_DELAY_US 2000 /* 2 ms */ + +#define to_pcie_ep(x) dev_get_drvdata((x)->dev) + +enum qcom_pcie_ep_link_status { + QCOM_PCIE_EP_LINK_DISABLED, + QCOM_PCIE_EP_LINK_ENABLED, + QCOM_PCIE_EP_LINK_UP, + QCOM_PCIE_EP_LINK_DOWN, +}; + +static struct clk_bulk_data qcom_pcie_ep_clks[] = { + { .id = "cfg" }, + { .id = "aux" }, + { .id = "bus_master" }, + { .id = "bus_slave" }, + { .id = "ref" }, + { .id = "sleep" }, + { .id = "slave_q2a" }, +}; + +struct qcom_pcie_ep { + struct dw_pcie pci; + + void __iomem *parf; + void __iomem *elbi; + struct regmap *perst_map; + struct resource *mmio_res; + + struct reset_control *core_reset; + struct gpio_desc *reset; + struct gpio_desc *wake; + struct phy *phy; + + u32 perst_en; + u32 perst_sep_en; + + enum qcom_pcie_ep_link_status link_status; + int global_irq; + int perst_irq; +}; + +static int qcom_pcie_ep_core_reset(struct qcom_pcie_ep *pcie_ep) +{ + struct dw_pcie *pci = &pcie_ep->pci; + struct device *dev = pci->dev; + int ret; + + ret = reset_control_assert(pcie_ep->core_reset); + if (ret) { + dev_err(dev, "Cannot assert core reset\n"); + return ret; + } + + usleep_range(CORE_RESET_TIME_US_MIN, CORE_RESET_TIME_US_MAX); + + ret = reset_control_deassert(pcie_ep->core_reset); + if (ret) { + dev_err(dev, "Cannot de-assert core reset\n"); + return ret; + } + + usleep_range(CORE_RESET_TIME_US_MIN, CORE_RESET_TIME_US_MAX); + + return 0; +} + +/* + * Delatch PERST_EN and PERST_SEPARATION_ENABLE with TCSR to avoid + * device reset during host reboot and hibernation. The driver is + * expected to handle this situation. 
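+ *
+ * The perst_en and perst_sep_en offsets written below are not hard-coded;
+ * they come from the second and third cells of the "qcom,perst-regs" DT
+ * property (a TCSR syscon phandle plus two register offsets), parsed in
+ * qcom_pcie_ep_get_io_resources().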
+ */ +static void qcom_pcie_ep_configure_tcsr(struct qcom_pcie_ep *pcie_ep) +{ + regmap_write(pcie_ep->perst_map, pcie_ep->perst_en, 0); + regmap_write(pcie_ep->perst_map, pcie_ep->perst_sep_en, 0); +} + +static int qcom_pcie_dw_link_up(struct dw_pcie *pci) +{ + struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); + u32 reg; + + reg = readl_relaxed(pcie_ep->elbi + ELBI_SYS_STTS); + + return reg & XMLH_LINK_UP; +} + +static int qcom_pcie_dw_start_link(struct dw_pcie *pci) +{ + struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); + + enable_irq(pcie_ep->perst_irq); + + return 0; +} + +static void qcom_pcie_dw_stop_link(struct dw_pcie *pci) +{ + struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); + + disable_irq(pcie_ep->perst_irq); +} + +static int qcom_pcie_perst_deassert(struct dw_pcie *pci) +{ + struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); + struct device *dev = pci->dev; + u32 val, offset; + int ret; + + ret = clk_bulk_prepare_enable(ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + if (ret) + return ret; + + ret = qcom_pcie_ep_core_reset(pcie_ep); + if (ret) + goto err_disable_clk; + + ret = phy_init(pcie_ep->phy); + if (ret) + goto err_disable_clk; + + ret = phy_power_on(pcie_ep->phy); + if (ret) + goto err_phy_exit; + + /* Assert WAKE# to RC to indicate device is ready */ + gpiod_set_value_cansleep(pcie_ep->wake, 1); + usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500); + gpiod_set_value_cansleep(pcie_ep->wake, 0); + + qcom_pcie_ep_configure_tcsr(pcie_ep); + + /* Disable BDF to SID mapping */ + val = readl_relaxed(pcie_ep->parf + PARF_BDF_TO_SID_CFG); + val |= PARF_BDF_TO_SID_BYPASS; + writel_relaxed(val, pcie_ep->parf + PARF_BDF_TO_SID_CFG); + + /* Enable debug IRQ */ + val = readl_relaxed(pcie_ep->parf + PARF_DEBUG_INT_EN); + val |= PARF_DEBUG_INT_RADM_PM_TURNOFF | + PARF_DEBUG_INT_CFG_BUS_MASTER_EN | + PARF_DEBUG_INT_PM_DSTATE_CHANGE; + writel_relaxed(val, pcie_ep->parf + PARF_DEBUG_INT_EN); + + /* Configure PCIe to endpoint mode */ + writel_relaxed(PARF_DEVICE_TYPE_EP, pcie_ep->parf + PARF_DEVICE_TYPE); + + /* Allow entering L1 state */ + val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL); + val &= ~PARF_PM_CTRL_REQ_NOT_ENTR_L1; + writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL); + + /* Read halts write */ + val = readl_relaxed(pcie_ep->parf + PARF_AXI_MSTR_RD_HALT_NO_WRITES); + val &= ~PARF_AXI_MSTR_RD_HALT_NO_WRITE_EN; + writel_relaxed(val, pcie_ep->parf + PARF_AXI_MSTR_RD_HALT_NO_WRITES); + + /* Write after write halt */ + val = readl_relaxed(pcie_ep->parf + PARF_AXI_MSTR_WR_ADDR_HALT); + val |= PARF_AXI_MSTR_WR_ADDR_HALT_EN; + writel_relaxed(val, pcie_ep->parf + PARF_AXI_MSTR_WR_ADDR_HALT); + + /* Q2A flush disable */ + val = readl_relaxed(pcie_ep->parf + PARF_Q2A_FLUSH); + val &= ~PARF_Q2A_FLUSH_EN; + writel_relaxed(val, pcie_ep->parf + PARF_Q2A_FLUSH); + + /* Disable DBI Wakeup, core clock CGC and enable AUX power */ + val = readl_relaxed(pcie_ep->parf + PARF_SYS_CTRL); + val |= PARF_SYS_CTRL_SLV_DBI_WAKE_DISABLE | + PARF_SYS_CTRL_CORE_CLK_CGC_DIS | + PARF_SYS_CTRL_AUX_PWR_DET; + writel_relaxed(val, pcie_ep->parf + PARF_SYS_CTRL); + + /* Disable the debouncers */ + val = readl_relaxed(pcie_ep->parf + PARF_DB_CTRL); + val |= PARF_DB_CTRL_INSR_DBNCR_BLOCK | PARF_DB_CTRL_RMVL_DBNCR_BLOCK | + PARF_DB_CTRL_DBI_WKP_BLOCK | PARF_DB_CTRL_SLV_WKP_BLOCK | + PARF_DB_CTRL_MST_WKP_BLOCK; + writel_relaxed(val, pcie_ep->parf + PARF_DB_CTRL); + + /* Request to exit from L1SS for MSI and LTR MSG */ + val = readl_relaxed(pcie_ep->parf + PARF_CFG_BITS); + val |= PARF_CFG_BITS_REQ_EXIT_L1SS_MSI_LTR_EN; 
+ writel_relaxed(val, pcie_ep->parf + PARF_CFG_BITS); + + dw_pcie_dbi_ro_wr_en(pci); + + /* Set the L0s Exit Latency to 2us-4us = 0x6 */ + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); + val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP); + val &= ~PCI_EXP_LNKCAP_L0SEL; + val |= FIELD_PREP(PCI_EXP_LNKCAP_L0SEL, 0x6); + dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val); + + /* Set the L1 Exit Latency to be 32us-64 us = 0x6 */ + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); + val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP); + val &= ~PCI_EXP_LNKCAP_L1EL; + val |= FIELD_PREP(PCI_EXP_LNKCAP_L1EL, 0x6); + dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val); + + dw_pcie_dbi_ro_wr_dis(pci); + + writel_relaxed(0, pcie_ep->parf + PARF_INT_ALL_MASK); + val = PARF_INT_ALL_LINK_DOWN | PARF_INT_ALL_BME | + PARF_INT_ALL_PM_TURNOFF | PARF_INT_ALL_DSTATE_CHANGE | + PARF_INT_ALL_LINK_UP; + writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK); + + ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep); + if (ret) { + dev_err(dev, "Failed to complete initialization: %d\n", ret); + goto err_phy_power_off; + } + + /* + * The physical address of the MMIO region which is exposed as the BAR + * should be written to MHI BASE registers. + */ + writel_relaxed(pcie_ep->mmio_res->start, + pcie_ep->parf + PARF_MHI_BASE_ADDR_LOWER); + writel_relaxed(0, pcie_ep->parf + PARF_MHI_BASE_ADDR_UPPER); + + dw_pcie_ep_init_notify(&pcie_ep->pci.ep); + + /* Enable LTSSM */ + val = readl_relaxed(pcie_ep->parf + PARF_LTSSM); + val |= BIT(8); + writel_relaxed(val, pcie_ep->parf + PARF_LTSSM); + + return 0; + +err_phy_power_off: + phy_power_off(pcie_ep->phy); +err_phy_exit: + phy_exit(pcie_ep->phy); +err_disable_clk: + clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + + return ret; +} + +static void qcom_pcie_perst_assert(struct dw_pcie *pci) +{ + struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); + struct device *dev = pci->dev; + + if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) { + dev_dbg(dev, "Link is already disabled\n"); + return; + } + + phy_power_off(pcie_ep->phy); + phy_exit(pcie_ep->phy); + clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED; +} + +/* Common DWC controller ops */ +static const struct dw_pcie_ops pci_ops = { + .link_up = qcom_pcie_dw_link_up, + .start_link = qcom_pcie_dw_start_link, + .stop_link = qcom_pcie_dw_stop_link, +}; + +static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev, + struct qcom_pcie_ep *pcie_ep) +{ + struct device *dev = &pdev->dev; + struct dw_pcie *pci = &pcie_ep->pci; + struct device_node *syscon; + struct resource *res; + int ret; + + pcie_ep->parf = devm_platform_ioremap_resource_byname(pdev, "parf"); + if (IS_ERR(pcie_ep->parf)) + return PTR_ERR(pcie_ep->parf); + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); + pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); + if (IS_ERR(pci->dbi_base)) + return PTR_ERR(pci->dbi_base); + pci->dbi_base2 = pci->dbi_base; + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); + pcie_ep->elbi = devm_pci_remap_cfg_resource(dev, res); + if (IS_ERR(pcie_ep->elbi)) + return PTR_ERR(pcie_ep->elbi); + + pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, + "mmio"); + + syscon = of_parse_phandle(dev->of_node, "qcom,perst-regs", 0); + if (!syscon) { + dev_err(dev, "Failed to parse qcom,perst-regs\n"); + return -EINVAL; + } + + 
pcie_ep->perst_map = syscon_node_to_regmap(syscon); + of_node_put(syscon); + if (IS_ERR(pcie_ep->perst_map)) + return PTR_ERR(pcie_ep->perst_map); + + ret = of_property_read_u32_index(dev->of_node, "qcom,perst-regs", + 1, &pcie_ep->perst_en); + if (ret < 0) { + dev_err(dev, "No Perst Enable offset in syscon\n"); + return ret; + } + + ret = of_property_read_u32_index(dev->of_node, "qcom,perst-regs", + 2, &pcie_ep->perst_sep_en); + if (ret < 0) { + dev_err(dev, "No Perst Separation Enable offset in syscon\n"); + return ret; + } + + return 0; +} + +static int qcom_pcie_ep_get_resources(struct platform_device *pdev, + struct qcom_pcie_ep *pcie_ep) +{ + struct device *dev = &pdev->dev; + int ret; + + ret = qcom_pcie_ep_get_io_resources(pdev, pcie_ep); + if (ret) { + dev_err(&pdev->dev, "Failed to get io resources %d\n", ret); + return ret; + } + + ret = devm_clk_bulk_get(dev, ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + if (ret) + return ret; + + pcie_ep->core_reset = devm_reset_control_get_exclusive(dev, "core"); + if (IS_ERR(pcie_ep->core_reset)) + return PTR_ERR(pcie_ep->core_reset); + + pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN); + if (IS_ERR(pcie_ep->reset)) + return PTR_ERR(pcie_ep->reset); + + pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW); + if (IS_ERR(pcie_ep->wake)) + return PTR_ERR(pcie_ep->wake); + + pcie_ep->phy = devm_phy_optional_get(&pdev->dev, "pciephy"); + if (IS_ERR(pcie_ep->phy)) + ret = PTR_ERR(pcie_ep->phy); + + return ret; +} + +/* TODO: Notify clients about PCIe state change */ +static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data) +{ + struct qcom_pcie_ep *pcie_ep = data; + struct dw_pcie *pci = &pcie_ep->pci; + struct device *dev = pci->dev; + u32 status = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_STATUS); + u32 mask = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_MASK); + u32 dstate, val; + + writel_relaxed(status, pcie_ep->parf + PARF_INT_ALL_CLEAR); + status &= mask; + + if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) { + dev_dbg(dev, "Received Linkdown event\n"); + pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN; + } else if (FIELD_GET(PARF_INT_ALL_BME, status)) { + dev_dbg(dev, "Received BME event. Link is enabled!\n"); + pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED; + } else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) { + dev_dbg(dev, "Received PM Turn-off event! Entering L23\n"); + val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL); + val |= PARF_PM_CTRL_READY_ENTR_L23; + writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL); + } else if (FIELD_GET(PARF_INT_ALL_DSTATE_CHANGE, status)) { + dstate = dw_pcie_readl_dbi(pci, DBI_CON_STATUS) & + DBI_CON_STATUS_POWER_STATE_MASK; + dev_dbg(dev, "Received D%d state event\n", dstate); + if (dstate == 3) { + val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL); + val |= PARF_PM_CTRL_REQ_EXIT_L1; + writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL); + } + } else if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) { + dev_dbg(dev, "Received Linkup event. Enumeration complete!\n"); + dw_pcie_ep_linkup(&pci->ep); + pcie_ep->link_status = QCOM_PCIE_EP_LINK_UP; + } else { + dev_dbg(dev, "Received unknown event: %d\n", status); + } + + return IRQ_HANDLED; +} + +static irqreturn_t qcom_pcie_ep_perst_irq_thread(int irq, void *data) +{ + struct qcom_pcie_ep *pcie_ep = data; + struct dw_pcie *pci = &pcie_ep->pci; + struct device *dev = pci->dev; + u32 perst; + + perst = gpiod_get_value(pcie_ep->reset); + if (perst) { + dev_dbg(dev, "PERST asserted by host. 
Shutting down the PCIe link!\n"); + qcom_pcie_perst_assert(pci); + } else { + dev_dbg(dev, "PERST de-asserted by host. Starting link training!\n"); + qcom_pcie_perst_deassert(pci); + } + + irq_set_irq_type(gpiod_to_irq(pcie_ep->reset), + (perst ? IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW)); + + return IRQ_HANDLED; +} + +static int qcom_pcie_ep_enable_irq_resources(struct platform_device *pdev, + struct qcom_pcie_ep *pcie_ep) +{ + int irq, ret; + + irq = platform_get_irq_byname(pdev, "global"); + if (irq < 0) { + dev_err(&pdev->dev, "Failed to get Global IRQ\n"); + return irq; + } + + ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, + qcom_pcie_ep_global_irq_thread, + IRQF_ONESHOT, + "global_irq", pcie_ep); + if (ret) { + dev_err(&pdev->dev, "Failed to request Global IRQ\n"); + return ret; + } + + pcie_ep->perst_irq = gpiod_to_irq(pcie_ep->reset); + irq_set_status_flags(pcie_ep->perst_irq, IRQ_NOAUTOEN); + ret = devm_request_threaded_irq(&pdev->dev, pcie_ep->perst_irq, NULL, + qcom_pcie_ep_perst_irq_thread, + IRQF_TRIGGER_HIGH | IRQF_ONESHOT, + "perst_irq", pcie_ep); + if (ret) { + dev_err(&pdev->dev, "Failed to request PERST IRQ\n"); + disable_irq(irq); + return ret; + } + + return 0; +} + +static int qcom_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, + enum pci_epc_irq_type type, u16 interrupt_num) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + + switch (type) { + case PCI_EPC_IRQ_LEGACY: + return dw_pcie_ep_raise_legacy_irq(ep, func_no); + case PCI_EPC_IRQ_MSI: + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); + default: + dev_err(pci->dev, "Unknown IRQ type\n"); + return -EINVAL; + } +} + +static const struct pci_epc_features qcom_pcie_epc_features = { + .linkup_notifier = true, + .core_init_notifier = true, + .msi_capable = true, + .msix_capable = false, +}; + +static const struct pci_epc_features * +qcom_pcie_epc_get_features(struct dw_pcie_ep *pci_ep) +{ + return &qcom_pcie_epc_features; +} + +static void qcom_pcie_ep_init(struct dw_pcie_ep *ep) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + enum pci_barno bar; + + for (bar = BAR_0; bar <= BAR_5; bar++) + dw_pcie_ep_reset_bar(pci, bar); +} + +static struct dw_pcie_ep_ops pci_ep_ops = { + .ep_init = qcom_pcie_ep_init, + .raise_irq = qcom_pcie_ep_raise_irq, + .get_features = qcom_pcie_epc_get_features, +}; + +static int qcom_pcie_ep_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct qcom_pcie_ep *pcie_ep; + int ret; + + pcie_ep = devm_kzalloc(dev, sizeof(*pcie_ep), GFP_KERNEL); + if (!pcie_ep) + return -ENOMEM; + + pcie_ep->pci.dev = dev; + pcie_ep->pci.ops = &pci_ops; + pcie_ep->pci.ep.ops = &pci_ep_ops; + platform_set_drvdata(pdev, pcie_ep); + + ret = qcom_pcie_ep_get_resources(pdev, pcie_ep); + if (ret) + return ret; + + ret = clk_bulk_prepare_enable(ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + if (ret) + return ret; + + ret = qcom_pcie_ep_core_reset(pcie_ep); + if (ret) + goto err_disable_clk; + + ret = phy_init(pcie_ep->phy); + if (ret) + goto err_disable_clk; + + /* PHY needs to be powered on for dw_pcie_ep_init() */ + ret = phy_power_on(pcie_ep->phy); + if (ret) + goto err_phy_exit; + + ret = dw_pcie_ep_init(&pcie_ep->pci.ep); + if (ret) { + dev_err(dev, "Failed to initialize endpoint: %d\n", ret); + goto err_phy_power_off; + } + + ret = qcom_pcie_ep_enable_irq_resources(pdev, pcie_ep); + if (ret) + goto err_phy_power_off; + + return 0; + +err_phy_power_off: + phy_power_off(pcie_ep->phy); +err_phy_exit: + phy_exit(pcie_ep->phy); +err_disable_clk: + 
clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + + return ret; +} + +static int qcom_pcie_ep_remove(struct platform_device *pdev) +{ + struct qcom_pcie_ep *pcie_ep = platform_get_drvdata(pdev); + + if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) + return 0; + + phy_power_off(pcie_ep->phy); + phy_exit(pcie_ep->phy); + clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks), + qcom_pcie_ep_clks); + + return 0; +} + +static const struct of_device_id qcom_pcie_ep_match[] = { + { .compatible = "qcom,sdx55-pcie-ep", }, + { } +}; + +static struct platform_driver qcom_pcie_ep_driver = { + .probe = qcom_pcie_ep_probe, + .remove = qcom_pcie_ep_remove, + .driver = { + .name = "qcom-pcie-ep", + .of_match_table = qcom_pcie_ep_match, + }, +}; +builtin_platform_driver(qcom_pcie_ep_driver); + +MODULE_AUTHOR("Siddartha Mohanadoss <smohanad@codeaurora.org>"); +MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>"); +MODULE_DESCRIPTION("Qualcomm PCIe Endpoint controller driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index 8a7a300163e5..1c3d1116bb60 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -166,6 +166,9 @@ struct qcom_pcie_resources_2_7_0 { struct regulator_bulk_data supplies[2]; struct reset_control *pci_reset; struct clk *pipe_clk; + struct clk *pipe_clk_src; + struct clk *phy_pipe_clk; + struct clk *ref_clk_src; }; union qcom_pcie_resources { @@ -189,6 +192,11 @@ struct qcom_pcie_ops { int (*config_sid)(struct qcom_pcie *pcie); }; +struct qcom_pcie_cfg { + const struct qcom_pcie_ops *ops; + unsigned int pipe_clk_need_muxing:1; +}; + struct qcom_pcie { struct dw_pcie *pci; void __iomem *parf; /* DT parf */ @@ -197,6 +205,7 @@ struct qcom_pcie { struct phy *phy; struct gpio_desc *reset; const struct qcom_pcie_ops *ops; + unsigned int pipe_clk_need_muxing:1; }; #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) @@ -1167,6 +1176,20 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie) if (ret < 0) return ret; + if (pcie->pipe_clk_need_muxing) { + res->pipe_clk_src = devm_clk_get(dev, "pipe_mux"); + if (IS_ERR(res->pipe_clk_src)) + return PTR_ERR(res->pipe_clk_src); + + res->phy_pipe_clk = devm_clk_get(dev, "phy_pipe"); + if (IS_ERR(res->phy_pipe_clk)) + return PTR_ERR(res->phy_pipe_clk); + + res->ref_clk_src = devm_clk_get(dev, "ref"); + if (IS_ERR(res->ref_clk_src)) + return PTR_ERR(res->ref_clk_src); + } + res->pipe_clk = devm_clk_get(dev, "pipe"); return PTR_ERR_OR_ZERO(res->pipe_clk); } @@ -1185,6 +1208,10 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie) return ret; } + /* Set TCXO as clock source for pcie_pipe_clk_src */ + if (pcie->pipe_clk_need_muxing) + clk_set_parent(res->pipe_clk_src, res->ref_clk_src); + ret = clk_bulk_prepare_enable(res->num_clks, res->clks); if (ret < 0) goto err_disable_regulators; @@ -1256,6 +1283,10 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie) { struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; + /* Set pipe clock as clock source for pcie_pipe_clk_src */ + if (pcie->pipe_clk_need_muxing) + clk_set_parent(res->pipe_clk_src, res->phy_pipe_clk); + return clk_prepare_enable(res->pipe_clk); } @@ -1456,6 +1487,39 @@ static const struct qcom_pcie_ops ops_1_9_0 = { .config_sid = qcom_pcie_config_sid_sm8250, }; +static const struct qcom_pcie_cfg apq8084_cfg = { + .ops = &ops_1_0_0, +}; + +static const struct qcom_pcie_cfg ipq8064_cfg = { 
+ .ops = &ops_2_1_0, +}; + +static const struct qcom_pcie_cfg msm8996_cfg = { + .ops = &ops_2_3_2, +}; + +static const struct qcom_pcie_cfg ipq8074_cfg = { + .ops = &ops_2_3_3, +}; + +static const struct qcom_pcie_cfg ipq4019_cfg = { + .ops = &ops_2_4_0, +}; + +static const struct qcom_pcie_cfg sdm845_cfg = { + .ops = &ops_2_7_0, +}; + +static const struct qcom_pcie_cfg sm8250_cfg = { + .ops = &ops_1_9_0, +}; + +static const struct qcom_pcie_cfg sc7280_cfg = { + .ops = &ops_1_9_0, + .pipe_clk_need_muxing = true, +}; + static const struct dw_pcie_ops dw_pcie_ops = { .link_up = qcom_pcie_link_up, .start_link = qcom_pcie_start_link, @@ -1467,6 +1531,7 @@ static int qcom_pcie_probe(struct platform_device *pdev) struct pcie_port *pp; struct dw_pcie *pci; struct qcom_pcie *pcie; + const struct qcom_pcie_cfg *pcie_cfg; int ret; pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); @@ -1488,7 +1553,14 @@ static int qcom_pcie_probe(struct platform_device *pdev) pcie->pci = pci; - pcie->ops = of_device_get_match_data(dev); + pcie_cfg = of_device_get_match_data(dev); + if (!pcie_cfg || !pcie_cfg->ops) { + dev_err(dev, "Invalid platform data\n"); + return -EINVAL; + } + + pcie->ops = pcie_cfg->ops; + pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing; pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); if (IS_ERR(pcie->reset)) { @@ -1545,16 +1617,18 @@ err_pm_runtime_put: } static const struct of_device_id qcom_pcie_match[] = { - { .compatible = "qcom,pcie-apq8084", .data = &ops_1_0_0 }, - { .compatible = "qcom,pcie-ipq8064", .data = &ops_2_1_0 }, - { .compatible = "qcom,pcie-ipq8064-v2", .data = &ops_2_1_0 }, - { .compatible = "qcom,pcie-apq8064", .data = &ops_2_1_0 }, - { .compatible = "qcom,pcie-msm8996", .data = &ops_2_3_2 }, - { .compatible = "qcom,pcie-ipq8074", .data = &ops_2_3_3 }, - { .compatible = "qcom,pcie-ipq4019", .data = &ops_2_4_0 }, - { .compatible = "qcom,pcie-qcs404", .data = &ops_2_4_0 }, - { .compatible = "qcom,pcie-sdm845", .data = &ops_2_7_0 }, - { .compatible = "qcom,pcie-sm8250", .data = &ops_1_9_0 }, + { .compatible = "qcom,pcie-apq8084", .data = &apq8084_cfg }, + { .compatible = "qcom,pcie-ipq8064", .data = &ipq8064_cfg }, + { .compatible = "qcom,pcie-ipq8064-v2", .data = &ipq8064_cfg }, + { .compatible = "qcom,pcie-apq8064", .data = &ipq8064_cfg }, + { .compatible = "qcom,pcie-msm8996", .data = &msm8996_cfg }, + { .compatible = "qcom,pcie-ipq8074", .data = &ipq8074_cfg }, + { .compatible = "qcom,pcie-ipq4019", .data = &ipq4019_cfg }, + { .compatible = "qcom,pcie-qcs404", .data = &ipq4019_cfg }, + { .compatible = "qcom,pcie-sdm845", .data = &sdm845_cfg }, + { .compatible = "qcom,pcie-sm8250", .data = &sm8250_cfg }, + { .compatible = "qcom,pcie-sc8180x", .data = &sm8250_cfg }, + { .compatible = "qcom,pcie-sc7280", .data = &sc7280_cfg }, { } }; diff --git a/drivers/pci/controller/dwc/pcie-uniphier.c b/drivers/pci/controller/dwc/pcie-uniphier.c index d842fd018129..d05be942956e 100644 --- a/drivers/pci/controller/dwc/pcie-uniphier.c +++ b/drivers/pci/controller/dwc/pcie-uniphier.c @@ -168,30 +168,21 @@ static void uniphier_pcie_irq_enable(struct uniphier_pcie_priv *priv) writel(PCL_RCV_INTX_ALL_ENABLE, priv->base + PCL_RCV_INTX); } -static void uniphier_pcie_irq_ack(struct irq_data *d) -{ - struct pcie_port *pp = irq_data_get_irq_chip_data(d); - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); - struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci); - u32 val; - - val = readl(priv->base + PCL_RCV_INTX); - val &= ~PCL_RCV_INTX_ALL_STATUS; - val |= 
BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_STATUS_SHIFT); - writel(val, priv->base + PCL_RCV_INTX); -} - static void uniphier_pcie_irq_mask(struct irq_data *d) { struct pcie_port *pp = irq_data_get_irq_chip_data(d); struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci); + unsigned long flags; u32 val; + raw_spin_lock_irqsave(&pp->lock, flags); + val = readl(priv->base + PCL_RCV_INTX); - val &= ~PCL_RCV_INTX_ALL_MASK; val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT); writel(val, priv->base + PCL_RCV_INTX); + + raw_spin_unlock_irqrestore(&pp->lock, flags); } static void uniphier_pcie_irq_unmask(struct irq_data *d) @@ -199,17 +190,20 @@ static void uniphier_pcie_irq_unmask(struct irq_data *d) struct pcie_port *pp = irq_data_get_irq_chip_data(d); struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci); + unsigned long flags; u32 val; + raw_spin_lock_irqsave(&pp->lock, flags); + val = readl(priv->base + PCL_RCV_INTX); - val &= ~PCL_RCV_INTX_ALL_MASK; val &= ~BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT); writel(val, priv->base + PCL_RCV_INTX); + + raw_spin_unlock_irqrestore(&pp->lock, flags); } static struct irq_chip uniphier_pcie_irq_chip = { .name = "PCI", - .irq_ack = uniphier_pcie_irq_ack, .irq_mask = uniphier_pcie_irq_mask, .irq_unmask = uniphier_pcie_irq_unmask, }; diff --git a/drivers/pci/controller/dwc/pcie-visconti.c b/drivers/pci/controller/dwc/pcie-visconti.c index a88eab6829bb..50f80f07e4db 100644 --- a/drivers/pci/controller/dwc/pcie-visconti.c +++ b/drivers/pci/controller/dwc/pcie-visconti.c @@ -279,13 +279,10 @@ static int visconti_add_pcie_port(struct visconti_pcie *pcie, { struct dw_pcie *pci = &pcie->pci; struct pcie_port *pp = &pci->pp; - struct device *dev = &pdev->dev; pp->irq = platform_get_irq_byname(pdev, "intr"); - if (pp->irq < 0) { - dev_err(dev, "Interrupt intr is missing"); + if (pp->irq < 0) return pp->irq; - } pp->ops = &visconti_pcie_host_ops; diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c index 596ebcfcc82d..c5300d49807a 100644 --- a/drivers/pci/controller/pci-aardvark.c +++ b/drivers/pci/controller/pci-aardvark.c @@ -31,10 +31,8 @@ /* PCIe core registers */ #define PCIE_CORE_DEV_ID_REG 0x0 #define PCIE_CORE_CMD_STATUS_REG 0x4 -#define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0) -#define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1) -#define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) #define PCIE_CORE_DEV_REV_REG 0x8 +#define PCIE_CORE_EXP_ROM_BAR_REG 0x30 #define PCIE_CORE_PCIEXP_CAP 0xc0 #define PCIE_CORE_ERR_CAPCTL_REG 0x118 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5) @@ -99,6 +97,7 @@ #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) #define PCIE_CORE_REF_CLK_REG (CONTROL_BASE_ADDR + 0x14) #define PCIE_CORE_REF_CLK_TX_ENABLE BIT(1) +#define PCIE_CORE_REF_CLK_RX_ENABLE BIT(2) #define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30) #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) #define PCIE_MSG_PM_PME_MASK BIT(7) @@ -106,18 +105,19 @@ #define PCIE_ISR0_MSI_INT_PENDING BIT(24) #define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val)) #define PCIE_ISR0_INTX_DEASSERT(val) BIT(20 + (val)) -#define PCIE_ISR0_ALL_MASK GENMASK(26, 0) +#define PCIE_ISR0_ALL_MASK GENMASK(31, 0) #define PCIE_ISR1_REG (CONTROL_BASE_ADDR + 0x48) #define PCIE_ISR1_MASK_REG (CONTROL_BASE_ADDR + 0x4C) #define PCIE_ISR1_POWER_STATE_CHANGE BIT(4) #define PCIE_ISR1_FLUSH BIT(5) #define PCIE_ISR1_INTX_ASSERT(val) BIT(8 + (val)) -#define PCIE_ISR1_ALL_MASK GENMASK(11, 4) +#define PCIE_ISR1_ALL_MASK 
GENMASK(31, 0) #define PCIE_MSI_ADDR_LOW_REG (CONTROL_BASE_ADDR + 0x50) #define PCIE_MSI_ADDR_HIGH_REG (CONTROL_BASE_ADDR + 0x54) #define PCIE_MSI_STATUS_REG (CONTROL_BASE_ADDR + 0x58) #define PCIE_MSI_MASK_REG (CONTROL_BASE_ADDR + 0x5C) #define PCIE_MSI_PAYLOAD_REG (CONTROL_BASE_ADDR + 0x9C) +#define PCIE_MSI_DATA_MASK GENMASK(15, 0) /* PCIe window configuration */ #define OB_WIN_BASE_ADDR 0x4c00 @@ -164,8 +164,50 @@ #define CFG_REG (LMI_BASE_ADDR + 0x0) #define LTSSM_SHIFT 24 #define LTSSM_MASK 0x3f -#define LTSSM_L0 0x10 #define RC_BAR_CONFIG 0x300 + +/* LTSSM values in CFG_REG */ +enum { + LTSSM_DETECT_QUIET = 0x0, + LTSSM_DETECT_ACTIVE = 0x1, + LTSSM_POLLING_ACTIVE = 0x2, + LTSSM_POLLING_COMPLIANCE = 0x3, + LTSSM_POLLING_CONFIGURATION = 0x4, + LTSSM_CONFIG_LINKWIDTH_START = 0x5, + LTSSM_CONFIG_LINKWIDTH_ACCEPT = 0x6, + LTSSM_CONFIG_LANENUM_ACCEPT = 0x7, + LTSSM_CONFIG_LANENUM_WAIT = 0x8, + LTSSM_CONFIG_COMPLETE = 0x9, + LTSSM_CONFIG_IDLE = 0xa, + LTSSM_RECOVERY_RCVR_LOCK = 0xb, + LTSSM_RECOVERY_SPEED = 0xc, + LTSSM_RECOVERY_RCVR_CFG = 0xd, + LTSSM_RECOVERY_IDLE = 0xe, + LTSSM_L0 = 0x10, + LTSSM_RX_L0S_ENTRY = 0x11, + LTSSM_RX_L0S_IDLE = 0x12, + LTSSM_RX_L0S_FTS = 0x13, + LTSSM_TX_L0S_ENTRY = 0x14, + LTSSM_TX_L0S_IDLE = 0x15, + LTSSM_TX_L0S_FTS = 0x16, + LTSSM_L1_ENTRY = 0x17, + LTSSM_L1_IDLE = 0x18, + LTSSM_L2_IDLE = 0x19, + LTSSM_L2_TRANSMIT_WAKE = 0x1a, + LTSSM_DISABLED = 0x20, + LTSSM_LOOPBACK_ENTRY_MASTER = 0x21, + LTSSM_LOOPBACK_ACTIVE_MASTER = 0x22, + LTSSM_LOOPBACK_EXIT_MASTER = 0x23, + LTSSM_LOOPBACK_ENTRY_SLAVE = 0x24, + LTSSM_LOOPBACK_ACTIVE_SLAVE = 0x25, + LTSSM_LOOPBACK_EXIT_SLAVE = 0x26, + LTSSM_HOT_RESET = 0x27, + LTSSM_RECOVERY_EQUALIZATION_PHASE0 = 0x28, + LTSSM_RECOVERY_EQUALIZATION_PHASE1 = 0x29, + LTSSM_RECOVERY_EQUALIZATION_PHASE2 = 0x2a, + LTSSM_RECOVERY_EQUALIZATION_PHASE3 = 0x2b, +}; + #define VENDOR_ID_REG (LMI_BASE_ADDR + 0x44) /* PCIe core controller registers */ @@ -198,7 +240,7 @@ #define PCIE_IRQ_MSI_INT2_DET BIT(21) #define PCIE_IRQ_RC_DBELL_DET BIT(22) #define PCIE_IRQ_EP_STATUS BIT(23) -#define PCIE_IRQ_ALL_MASK 0xfff0fb +#define PCIE_IRQ_ALL_MASK GENMASK(31, 0) #define PCIE_IRQ_ENABLE_INTS_MASK PCIE_IRQ_CORE_INT /* Transaction types */ @@ -257,18 +299,49 @@ static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg) return readl(pcie->base + reg); } -static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg) +static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie) { - return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8); + u32 val; + u8 ltssm_state; + + val = advk_readl(pcie, CFG_REG); + ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK; + return ltssm_state; } -static int advk_pcie_link_up(struct advk_pcie *pcie) +static inline bool advk_pcie_link_up(struct advk_pcie *pcie) { - u32 val, ltssm_state; + /* check if LTSSM is in normal operation - some L* state */ + u8 ltssm_state = advk_pcie_ltssm_state(pcie); + return ltssm_state >= LTSSM_L0 && ltssm_state < LTSSM_DISABLED; +} - val = advk_readl(pcie, CFG_REG); - ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK; - return ltssm_state >= LTSSM_L0; +static inline bool advk_pcie_link_active(struct advk_pcie *pcie) +{ + /* + * According to PCIe Base specification 3.0, Table 4-14: Link + * Status Mapped to the LTSSM, and 4.2.6.3.6 Configuration.Idle + * is Link Up mapped to LTSSM Configuration.Idle, Recovery, L0, + * L0s, L1 and L2 states. And according to 3.2.1. Data Link + * Control and Management State Machine Rules is DL Up status + * reported in DL Active state. 
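+ *
+ * This is a looser check than advk_pcie_link_up() above: link_up()
+ * requires LTSSM to have reached L0 (or a lower power L-state), while
+ * link_active() also reports true in Configuration.Idle and Recovery,
+ * i.e. while the link is still (re)training.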
+ */ + u8 ltssm_state = advk_pcie_ltssm_state(pcie); + return ltssm_state >= LTSSM_CONFIG_IDLE && ltssm_state < LTSSM_DISABLED; +} + +static inline bool advk_pcie_link_training(struct advk_pcie *pcie) +{ + /* + * According to PCIe Base specification 3.0, Table 4-14: Link + * Status Mapped to the LTSSM is Link Training mapped to LTSSM + * Configuration and Recovery states. + */ + u8 ltssm_state = advk_pcie_ltssm_state(pcie); + return ((ltssm_state >= LTSSM_CONFIG_LINKWIDTH_START && + ltssm_state < LTSSM_L0) || + (ltssm_state >= LTSSM_RECOVERY_EQUALIZATION_PHASE0 && + ltssm_state <= LTSSM_RECOVERY_EQUALIZATION_PHASE3)); } static int advk_pcie_wait_for_link(struct advk_pcie *pcie) @@ -291,7 +364,7 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie) size_t retries; for (retries = 0; retries < RETRAIN_WAIT_MAX_RETRIES; ++retries) { - if (!advk_pcie_link_up(pcie)) + if (advk_pcie_link_training(pcie)) break; udelay(RETRAIN_WAIT_USLEEP_US); } @@ -299,23 +372,9 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie) static void advk_pcie_issue_perst(struct advk_pcie *pcie) { - u32 reg; - if (!pcie->reset_gpio) return; - /* - * As required by PCI Express spec (PCI Express Base Specification, REV. - * 4.0 PCI Express, February 19 2014, 6.6.1 Conventional Reset) a delay - * for at least 100ms after de-asserting PERST# signal is needed before - * link training is enabled. So ensure that link training is disabled - * prior de-asserting PERST# signal to fulfill that PCI Express spec - * requirement. - */ - reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); - reg &= ~LINK_TRAINING_EN; - advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); - /* 10ms delay is needed for some cards */ dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n"); gpiod_set_value_cansleep(pcie->reset_gpio, 1); @@ -323,54 +382,47 @@ static void advk_pcie_issue_perst(struct advk_pcie *pcie) gpiod_set_value_cansleep(pcie->reset_gpio, 0); } -static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen) +static void advk_pcie_train_link(struct advk_pcie *pcie) { - int ret, neg_gen; + struct device *dev = &pcie->pdev->dev; u32 reg; + int ret; - /* Setup link speed */ + /* + * Setup PCIe rev / gen compliance based on device tree property + * 'max-link-speed' which also forces maximal link speed. + */ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); reg &= ~PCIE_GEN_SEL_MSK; - if (gen == 3) + if (pcie->link_gen == 3) reg |= SPEED_GEN_3; - else if (gen == 2) + else if (pcie->link_gen == 2) reg |= SPEED_GEN_2; else reg |= SPEED_GEN_1; advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); /* - * Enable link training. This is not needed in every call to this - * function, just once suffices, but it does not break anything either. + * Set maximal link speed value also into PCIe Link Control 2 register. + * Armada 3700 Functional Specification says that default value is based + * on SPEED_GEN but tests showed that default value is always 8.0 GT/s. */ + reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2); + reg &= ~PCI_EXP_LNKCTL2_TLS; + if (pcie->link_gen == 3) + reg |= PCI_EXP_LNKCTL2_TLS_8_0GT; + else if (pcie->link_gen == 2) + reg |= PCI_EXP_LNKCTL2_TLS_5_0GT; + else + reg |= PCI_EXP_LNKCTL2_TLS_2_5GT; + advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2); + + /* Enable link training after selecting PCIe generation */ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); reg |= LINK_TRAINING_EN; advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); /* - * Start link training immediately after enabling it. 
- * This solves problems for some buggy cards. - */ - reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); - reg |= PCI_EXP_LNKCTL_RL; - advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); - - ret = advk_pcie_wait_for_link(pcie); - if (ret) - return ret; - - reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA); - neg_gen = reg & PCI_EXP_LNKSTA_CLS; - - return neg_gen; -} - -static void advk_pcie_train_link(struct advk_pcie *pcie) -{ - struct device *dev = &pcie->pdev->dev; - int neg_gen = -1, gen; - - /* * Reset PCIe card via PERST# signal. Some cards are not detected * during link training when they are in some non-initial state. */ @@ -380,41 +432,18 @@ static void advk_pcie_train_link(struct advk_pcie *pcie) * PERST# signal could have been asserted by pinctrl subsystem before * probe() callback has been called or issued explicitly by reset gpio * function advk_pcie_issue_perst(), making the endpoint going into - * fundamental reset. As required by PCI Express spec a delay for at - * least 100ms after such a reset before link training is needed. + * fundamental reset. As required by PCI Express spec (PCI Express + * Base Specification, REV. 4.0 PCI Express, February 19 2014, 6.6.1 + * Conventional Reset) a delay for at least 100ms after such a reset + * before sending a Configuration Request to the device is needed. + * So wait until PCIe link is up. Function advk_pcie_wait_for_link() + * waits for link at least 900ms. */ - msleep(PCI_PM_D3COLD_WAIT); - - /* - * Try link training at link gen specified by device tree property - * 'max-link-speed'. If this fails, iteratively train at lower gen. - */ - for (gen = pcie->link_gen; gen > 0; --gen) { - neg_gen = advk_pcie_train_at_gen(pcie, gen); - if (neg_gen > 0) - break; - } - - if (neg_gen < 0) - goto err; - - /* - * After successful training if negotiated gen is lower than requested, - * train again on negotiated gen. This solves some stability issues for - * some buggy gen1 cards. - */ - if (neg_gen < gen) { - gen = neg_gen; - neg_gen = advk_pcie_train_at_gen(pcie, gen); - } - - if (neg_gen == gen) { - dev_info(dev, "link up at gen %i\n", gen); - return; - } - -err: - dev_err(dev, "link never came up\n"); + ret = advk_pcie_wait_for_link(pcie); + if (ret < 0) + dev_err(dev, "link never came up\n"); + else + dev_info(dev, "link up\n"); } /* @@ -451,9 +480,15 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) u32 reg; int i; - /* Enable TX */ + /* + * Configure PCIe Reference clock. Direction is from the PCIe + * controller to the endpoint card, so enable transmitting of + * Reference clock differential signal off-chip and disable + * receiving off-chip differential signal. + */ reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG); reg |= PCIE_CORE_REF_CLK_TX_ENABLE; + reg &= ~PCIE_CORE_REF_CLK_RX_ENABLE; advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG); /* Set to Direct mode */ @@ -477,6 +512,31 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL; advk_writel(pcie, reg, VENDOR_ID_REG); + /* + * Change Class Code of PCI Bridge device to PCI Bridge (0x600400), + * because the default value is Mass storage controller (0x010400). + * + * Note that this Aardvark PCI Bridge does not have compliant Type 1 + * Configuration Space and it even cannot be accessed via Aardvark's + * PCI config space access method. Something like config space is + * available in internal Aardvark registers starting at offset 0x0 + * and is reported as Type 0. 
In range 0x10 - 0x34 it has totally + * different registers. + * + * Therefore driver uses emulation of PCI Bridge which emulates + * access to configuration space via internal Aardvark registers or + * emulated configuration buffer. + */ + reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG); + reg &= ~0xffffff00; + reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8; + advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG); + + /* Disable Root Bridge I/O space, memory space and bus mastering */ + reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); + reg &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER); + advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG); + /* Set Advanced Error Capabilities and Control PF0 register */ reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX | PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN | @@ -488,8 +548,9 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); reg &= ~PCI_EXP_DEVCTL_RELAX_EN; reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN; + reg &= ~PCI_EXP_DEVCTL_PAYLOAD; reg &= ~PCI_EXP_DEVCTL_READRQ; - reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */ + reg |= PCI_EXP_DEVCTL_PAYLOAD_512B; reg |= PCI_EXP_DEVCTL_READRQ_512B; advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); @@ -574,19 +635,6 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) advk_pcie_disable_ob_win(pcie, i); advk_pcie_train_link(pcie); - - /* - * FIXME: The following register update is suspicious. This register is - * applicable only when the PCI controller is configured for Endpoint - * mode, not as a Root Complex. But apparently when this code is - * removed, some cards stop working. This should be investigated and - * a comment explaining this should be put here. - */ - reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); - reg |= PCIE_CORE_CMD_MEM_ACCESS_EN | - PCIE_CORE_CMD_IO_ACCESS_EN | - PCIE_CORE_CMD_MEM_IO_REQ_EN; - advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG); } static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val) @@ -595,6 +643,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3 u32 reg; unsigned int status; char *strcomp_status, *str_posted; + int ret; reg = advk_readl(pcie, PIO_STAT); status = (reg & PIO_COMPLETION_STATUS_MASK) >> @@ -619,6 +668,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3 case PIO_COMPLETION_STATUS_OK: if (reg & PIO_ERR_STATUS) { strcomp_status = "COMP_ERR"; + ret = -EFAULT; break; } /* Get the read result */ @@ -626,9 +676,11 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3 *val = advk_readl(pcie, PIO_RD_DATA); /* No error */ strcomp_status = NULL; + ret = 0; break; case PIO_COMPLETION_STATUS_UR: strcomp_status = "UR"; + ret = -EOPNOTSUPP; break; case PIO_COMPLETION_STATUS_CRS: if (allow_crs && val) { @@ -646,6 +698,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3 */ *val = CFG_RD_CRS_VAL; strcomp_status = NULL; + ret = 0; break; } /* PCIe r4.0, sec 2.3.2, says: @@ -661,31 +714,34 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3 * Request and taking appropriate action, e.g., complete the * Request to the host as a failed transaction. * - * To simplify implementation do not re-issue the Configuration - * Request and complete the Request as a failed transaction. + * So return -EAGAIN and caller (pci-aardvark.c driver) will + * re-issue request again up to the PIO_RETRY_CNT retries. 
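The practical effect of returning -EAGAIN here shows up further down in this patch, where advk_pcie_rd_conf() and advk_pcie_wr_conf() re-run the whole PIO transfer while a shared retry budget lasts. A simplified sketch of that shape; the helper names and the budget value are placeholders for this illustration, not the driver's symbols:

	#include <errno.h>

	#define RETRY_BUDGET	750	/* placeholder; the driver bounds this with PIO_RETRY_CNT */

	void start_pio_transfer(void) { /* clear PIO_ISR, write PIO_START */ }
	int  wait_for_pio_done(void)  { return 1; }	/* >0: polling steps consumed, <0: timeout */
	int  check_pio_status(void)   { return 0; }	/* 0 on success, -EAGAIN on CRS, other <0 on error */

	int config_access_with_crs_retry(void)
	{
		int retry_count = 0;
		int ret;

		do {
			start_pio_transfer();
			ret = wait_for_pio_done();
			if (ret < 0)
				return ret;	/* hard timeout, retrying is pointless */
			retry_count += ret;	/* polling steps count against the budget */
			ret = check_pio_status();
		} while (ret == -EAGAIN && retry_count < RETRY_BUDGET);

		return ret;
	}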
*/ strcomp_status = "CRS"; + ret = -EAGAIN; break; case PIO_COMPLETION_STATUS_CA: strcomp_status = "CA"; + ret = -ECANCELED; break; default: strcomp_status = "Unknown"; + ret = -EINVAL; break; } if (!strcomp_status) - return 0; + return ret; if (reg & PIO_NON_POSTED_REQ) str_posted = "Non-posted"; else str_posted = "Posted"; - dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n", + dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n", str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS)); - return -EFAULT; + return ret; } static int advk_pcie_wait_pio(struct advk_pcie *pcie) @@ -693,13 +749,13 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie) struct device *dev = &pcie->pdev->dev; int i; - for (i = 0; i < PIO_RETRY_CNT; i++) { + for (i = 1; i <= PIO_RETRY_CNT; i++) { u32 start, isr; start = advk_readl(pcie, PIO_START); isr = advk_readl(pcie, PIO_ISR); if (!start && isr) - return 0; + return i; udelay(PIO_RETRY_DELAY); } @@ -707,6 +763,72 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie) return -ETIMEDOUT; } +static pci_bridge_emul_read_status_t +advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge, + int reg, u32 *value) +{ + struct advk_pcie *pcie = bridge->data; + + switch (reg) { + case PCI_COMMAND: + *value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); + return PCI_BRIDGE_EMUL_HANDLED; + + case PCI_ROM_ADDRESS1: + *value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG); + return PCI_BRIDGE_EMUL_HANDLED; + + case PCI_INTERRUPT_LINE: { + /* + * From the whole 32bit register we support reading from HW only + * one bit: PCI_BRIDGE_CTL_BUS_RESET. + * Other bits are retrieved only from emulated config buffer. + */ + __le32 *cfgspace = (__le32 *)&bridge->conf; + u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]); + if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN) + val |= PCI_BRIDGE_CTL_BUS_RESET << 16; + else + val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16); + *value = val; + return PCI_BRIDGE_EMUL_HANDLED; + } + + default: + return PCI_BRIDGE_EMUL_NOT_HANDLED; + } +} + +static void +advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge, + int reg, u32 old, u32 new, u32 mask) +{ + struct advk_pcie *pcie = bridge->data; + + switch (reg) { + case PCI_COMMAND: + advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG); + break; + + case PCI_ROM_ADDRESS1: + advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG); + break; + + case PCI_INTERRUPT_LINE: + if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) { + u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG); + if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16)) + val |= HOT_RESET_GEN; + else + val &= ~HOT_RESET_GEN; + advk_writel(pcie, val, PCIE_CORE_CTRL1_REG); + } + break; + + default: + break; + } +} static pci_bridge_emul_read_status_t advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, @@ -723,6 +845,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, case PCI_EXP_RTCTL: { u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG); *value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE; + *value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE; *value |= PCI_EXP_RTCAP_CRSVIS << 16; return PCI_BRIDGE_EMUL_HANDLED; } @@ -734,12 +857,26 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, return PCI_BRIDGE_EMUL_HANDLED; } + case PCI_EXP_LNKCAP: { + u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); + /* + * PCI_EXP_LNKCAP_DLLLARC bit is hardwired in aardvark HW to 0. 
+ * But support for PCI_EXP_LNKSTA_DLLLA is emulated via ltssm + * state so explicitly enable PCI_EXP_LNKCAP_DLLLARC flag. + */ + val |= PCI_EXP_LNKCAP_DLLLARC; + *value = val; + return PCI_BRIDGE_EMUL_HANDLED; + } + case PCI_EXP_LNKCTL: { /* u32 contains both PCI_EXP_LNKCTL and PCI_EXP_LNKSTA */ u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg) & ~(PCI_EXP_LNKSTA_LT << 16); - if (!advk_pcie_link_up(pcie)) + if (advk_pcie_link_training(pcie)) val |= (PCI_EXP_LNKSTA_LT << 16); + if (advk_pcie_link_active(pcie)) + val |= (PCI_EXP_LNKSTA_DLLLA << 16); *value = val; return PCI_BRIDGE_EMUL_HANDLED; } @@ -747,7 +884,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, case PCI_CAP_LIST_ID: case PCI_EXP_DEVCAP: case PCI_EXP_DEVCTL: - case PCI_EXP_LNKCAP: *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); return PCI_BRIDGE_EMUL_HANDLED; default: @@ -794,6 +930,8 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, } static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { + .read_base = advk_pci_bridge_emul_base_conf_read, + .write_base = advk_pci_bridge_emul_base_conf_write, .read_pcie = advk_pci_bridge_emul_pcie_conf_read, .write_pcie = advk_pci_bridge_emul_pcie_conf_write, }; @@ -805,7 +943,6 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { static int advk_sw_pci_bridge_init(struct advk_pcie *pcie) { struct pci_bridge_emul *bridge = &pcie->bridge; - int ret; bridge->conf.vendor = cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff); @@ -825,19 +962,14 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie) /* Support interrupt A for MSI feature */ bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE; + /* Indicates supports for Completion Retry Status */ + bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS); + bridge->has_pcie = true; bridge->data = pcie; bridge->ops = &advk_pci_bridge_emul_ops; - /* PCIe config space can be initialized after pci_bridge_emul_init() */ - ret = pci_bridge_emul_init(bridge, 0); - if (ret < 0) - return ret; - - /* Indicates supports for Completion Retry Status */ - bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS); - - return 0; + return pci_bridge_emul_init(bridge, 0); } static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus, @@ -889,6 +1021,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where, int size, u32 *val) { struct advk_pcie *pcie = bus->sysdata; + int retry_count; bool allow_crs; u32 reg; int ret; @@ -911,18 +1044,8 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE); - if (advk_pcie_pio_is_running(pcie)) { - /* - * If it is possible return Completion Retry Status so caller - * tries to issue the request again instead of failing. 
- */ - if (allow_crs) { - *val = CFG_RD_CRS_VAL; - return PCIBIOS_SUCCESSFUL; - } - *val = 0xffffffff; - return PCIBIOS_SET_FAILED; - } + if (advk_pcie_pio_is_running(pcie)) + goto try_crs; /* Program the control register */ reg = advk_readl(pcie, PIO_CTRL); @@ -941,30 +1064,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, /* Program the data strobe */ advk_writel(pcie, 0xf, PIO_WR_DATA_STRB); - /* Clear PIO DONE ISR and start the transfer */ - advk_writel(pcie, 1, PIO_ISR); - advk_writel(pcie, 1, PIO_START); + retry_count = 0; + do { + /* Clear PIO DONE ISR and start the transfer */ + advk_writel(pcie, 1, PIO_ISR); + advk_writel(pcie, 1, PIO_START); - ret = advk_pcie_wait_pio(pcie); - if (ret < 0) { - /* - * If it is possible return Completion Retry Status so caller - * tries to issue the request again instead of failing. - */ - if (allow_crs) { - *val = CFG_RD_CRS_VAL; - return PCIBIOS_SUCCESSFUL; - } - *val = 0xffffffff; - return PCIBIOS_SET_FAILED; - } + ret = advk_pcie_wait_pio(pcie); + if (ret < 0) + goto try_crs; - /* Check PIO status and get the read result */ - ret = advk_pcie_check_pio_status(pcie, allow_crs, val); - if (ret < 0) { - *val = 0xffffffff; - return PCIBIOS_SET_FAILED; - } + retry_count += ret; + + /* Check PIO status and get the read result */ + ret = advk_pcie_check_pio_status(pcie, allow_crs, val); + } while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT); + + if (ret < 0) + goto fail; if (size == 1) *val = (*val >> (8 * (where & 3))) & 0xff; @@ -972,6 +1089,20 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, *val = (*val >> (8 * (where & 3))) & 0xffff; return PCIBIOS_SUCCESSFUL; + +try_crs: + /* + * If it is possible, return Completion Retry Status so that caller + * tries to issue the request again instead of failing. + */ + if (allow_crs) { + *val = CFG_RD_CRS_VAL; + return PCIBIOS_SUCCESSFUL; + } + +fail: + *val = 0xffffffff; + return PCIBIOS_SET_FAILED; } static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, @@ -980,6 +1111,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, struct advk_pcie *pcie = bus->sysdata; u32 reg; u32 data_strobe = 0x0; + int retry_count; int offset; int ret; @@ -1021,19 +1153,22 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, /* Program the data strobe */ advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB); - /* Clear PIO DONE ISR and start the transfer */ - advk_writel(pcie, 1, PIO_ISR); - advk_writel(pcie, 1, PIO_START); + retry_count = 0; + do { + /* Clear PIO DONE ISR and start the transfer */ + advk_writel(pcie, 1, PIO_ISR); + advk_writel(pcie, 1, PIO_START); - ret = advk_pcie_wait_pio(pcie); - if (ret < 0) - return PCIBIOS_SET_FAILED; + ret = advk_pcie_wait_pio(pcie); + if (ret < 0) + return PCIBIOS_SET_FAILED; - ret = advk_pcie_check_pio_status(pcie, false, NULL); - if (ret < 0) - return PCIBIOS_SET_FAILED; + retry_count += ret; - return PCIBIOS_SUCCESSFUL; + ret = advk_pcie_check_pio_status(pcie, false, NULL); + } while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT); + + return ret < 0 ? 
PCIBIOS_SET_FAILED : PCIBIOS_SUCCESSFUL; } static struct pci_ops advk_pcie_ops = { @@ -1082,7 +1217,7 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain, domain->host_data, handle_simple_irq, NULL, NULL); - return hwirq; + return 0; } static void advk_msi_irq_domain_free(struct irq_domain *domain, @@ -1263,8 +1398,12 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie) if (!(BIT(msi_idx) & msi_status)) continue; + /* + * msi_idx contains bits [4:0] of the msi_data and msi_data + * contains 16bit MSI interrupt number + */ advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG); - msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & 0xFF; + msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK; generic_handle_irq(msi_data); } @@ -1286,12 +1425,6 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie) isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK); - if (!isr0_status && !isr1_status) { - advk_writel(pcie, isr0_val, PCIE_ISR0_REG); - advk_writel(pcie, isr1_val, PCIE_ISR1_REG); - return; - } - /* Process MSI interrupts */ if (isr0_status & PCIE_ISR0_MSI_INT_PENDING) advk_pcie_handle_msi(pcie); diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 67c46e52c0dc..6733cb14e775 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -3126,14 +3126,14 @@ static int hv_pci_probe(struct hv_device *hdev, if (dom == HVPCI_DOM_INVALID) { dev_err(&hdev->device, - "Unable to use dom# 0x%hx or other numbers", dom_req); + "Unable to use dom# 0x%x or other numbers", dom_req); ret = -EINVAL; goto free_bus; } if (dom != dom_req) dev_info(&hdev->device, - "PCI dom# 0x%hx has collision, using 0x%hx", + "PCI dom# 0x%x has collision, using 0x%x", dom_req, dom); hbus->bridge->domain_nr = dom; diff --git a/drivers/pci/controller/pci-thunder-ecam.c b/drivers/pci/controller/pci-thunder-ecam.c index ffd84656544f..e9d5ca245f5e 100644 --- a/drivers/pci/controller/pci-thunder-ecam.c +++ b/drivers/pci/controller/pci-thunder-ecam.c @@ -17,7 +17,7 @@ static void set_val(u32 v, int where, int size, u32 *val) { int shift = (where & 3) * 8; - pr_debug("set_val %04x: %08x\n", (unsigned)(where & ~3), v); + pr_debug("set_val %04x: %08x\n", (unsigned int)(where & ~3), v); v >>= shift; if (size == 1) v &= 0xff; @@ -187,7 +187,7 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn, pr_debug("%04x:%04x - Fix pass#: %08x, where: %03x, devfn: %03x\n", vendor_device & 0xffff, vendor_device >> 16, class_rev, - (unsigned) where, devfn); + (unsigned int)where, devfn); /* Check for non type-00 header */ if (cfg_type == 0) { diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c index b7a8e062fcc5..c50ff279903c 100644 --- a/drivers/pci/controller/pci-xgene-msi.c +++ b/drivers/pci/controller/pci-xgene-msi.c @@ -302,7 +302,7 @@ static void xgene_msi_isr(struct irq_desc *desc) /* * MSIINTn (n is 0..F) indicates if there is a pending MSI interrupt - * If bit x of this register is set (x is 0..7), one or more interupts + * If bit x of this register is set (x is 0..7), one or more interrupts * corresponding to MSInIRx is set. 
*/ grp_select = xgene_msi_int_read(xgene_msi, msi_grp); diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c index e64536047b65..56d0d50338c8 100644 --- a/drivers/pci/controller/pci-xgene.c +++ b/drivers/pci/controller/pci-xgene.c @@ -48,7 +48,6 @@ #define EN_COHERENCY 0xF0000000 #define EN_REG 0x00000001 #define OB_LO_IO 0x00000002 -#define XGENE_PCIE_VENDORID 0x10E8 #define XGENE_PCIE_DEVICEID 0xE004 #define SZ_1T (SZ_1G*1024ULL) #define PIPE_PHY_RATE_RD(src) ((0xc000 & (u32)(src)) >> 0xe) @@ -560,7 +559,7 @@ static int xgene_pcie_setup(struct xgene_pcie_port *port) xgene_pcie_clear_config(port); /* setup the vendor and device IDs correctly */ - val = (XGENE_PCIE_DEVICEID << 16) | XGENE_PCIE_VENDORID; + val = (XGENE_PCIE_DEVICEID << 16) | PCI_VENDOR_ID_AMCC; xgene_pcie_writel(port, BRIDGE_CFG_0, val); ret = xgene_pcie_map_ranges(port); diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c new file mode 100644 index 000000000000..1bf4d75b61be --- /dev/null +++ b/drivers/pci/controller/pcie-apple.c @@ -0,0 +1,824 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCIe host bridge driver for Apple system-on-chips. + * + * The HW is ECAM compliant, so once the controller is initialized, + * the driver mostly deals MSI mapping and handling of per-port + * interrupts (INTx, management and error signals). + * + * Initialization requires enabling power and clocks, along with a + * number of register pokes. + * + * Copyright (C) 2021 Alyssa Rosenzweig <alyssa@rosenzweig.io> + * Copyright (C) 2021 Google LLC + * Copyright (C) 2021 Corellium LLC + * Copyright (C) 2021 Mark Kettenis <kettenis@openbsd.org> + * + * Author: Alyssa Rosenzweig <alyssa@rosenzweig.io> + * Author: Marc Zyngier <maz@kernel.org> + */ + +#include <linux/gpio/consumer.h> +#include <linux/kernel.h> +#include <linux/iopoll.h> +#include <linux/irqchip/chained_irq.h> +#include <linux/irqdomain.h> +#include <linux/list.h> +#include <linux/module.h> +#include <linux/msi.h> +#include <linux/notifier.h> +#include <linux/of_irq.h> +#include <linux/pci-ecam.h> + +#define CORE_RC_PHYIF_CTL 0x00024 +#define CORE_RC_PHYIF_CTL_RUN BIT(0) +#define CORE_RC_PHYIF_STAT 0x00028 +#define CORE_RC_PHYIF_STAT_REFCLK BIT(4) +#define CORE_RC_CTL 0x00050 +#define CORE_RC_CTL_RUN BIT(0) +#define CORE_RC_STAT 0x00058 +#define CORE_RC_STAT_READY BIT(0) +#define CORE_FABRIC_STAT 0x04000 +#define CORE_FABRIC_STAT_MASK 0x001F001F +#define CORE_LANE_CFG(port) (0x84000 + 0x4000 * (port)) +#define CORE_LANE_CFG_REFCLK0REQ BIT(0) +#define CORE_LANE_CFG_REFCLK1 BIT(1) +#define CORE_LANE_CFG_REFCLK0ACK BIT(2) +#define CORE_LANE_CFG_REFCLKEN (BIT(9) | BIT(10)) +#define CORE_LANE_CTL(port) (0x84004 + 0x4000 * (port)) +#define CORE_LANE_CTL_CFGACC BIT(15) + +#define PORT_LTSSMCTL 0x00080 +#define PORT_LTSSMCTL_START BIT(0) +#define PORT_INTSTAT 0x00100 +#define PORT_INT_TUNNEL_ERR 31 +#define PORT_INT_CPL_TIMEOUT 23 +#define PORT_INT_RID2SID_MAPERR 22 +#define PORT_INT_CPL_ABORT 21 +#define PORT_INT_MSI_BAD_DATA 19 +#define PORT_INT_MSI_ERR 18 +#define PORT_INT_REQADDR_GT32 17 +#define PORT_INT_AF_TIMEOUT 15 +#define PORT_INT_LINK_DOWN 14 +#define PORT_INT_LINK_UP 12 +#define PORT_INT_LINK_BWMGMT 11 +#define PORT_INT_AER_MASK (15 << 4) +#define PORT_INT_PORT_ERR 4 +#define PORT_INT_INTx(i) i +#define PORT_INT_INTx_MASK 15 +#define PORT_INTMSK 0x00104 +#define PORT_INTMSKSET 0x00108 +#define PORT_INTMSKCLR 0x0010c +#define PORT_MSICFG 0x00124 +#define PORT_MSICFG_EN BIT(0) +#define PORT_MSICFG_L2MSINUM_SHIFT 
4 +#define PORT_MSIBASE 0x00128 +#define PORT_MSIBASE_1_SHIFT 16 +#define PORT_MSIADDR 0x00168 +#define PORT_LINKSTS 0x00208 +#define PORT_LINKSTS_UP BIT(0) +#define PORT_LINKSTS_BUSY BIT(2) +#define PORT_LINKCMDSTS 0x00210 +#define PORT_OUTS_NPREQS 0x00284 +#define PORT_OUTS_NPREQS_REQ BIT(24) +#define PORT_OUTS_NPREQS_CPL BIT(16) +#define PORT_RXWR_FIFO 0x00288 +#define PORT_RXWR_FIFO_HDR GENMASK(15, 10) +#define PORT_RXWR_FIFO_DATA GENMASK(9, 0) +#define PORT_RXRD_FIFO 0x0028C +#define PORT_RXRD_FIFO_REQ GENMASK(6, 0) +#define PORT_OUTS_CPLS 0x00290 +#define PORT_OUTS_CPLS_SHRD GENMASK(14, 8) +#define PORT_OUTS_CPLS_WAIT GENMASK(6, 0) +#define PORT_APPCLK 0x00800 +#define PORT_APPCLK_EN BIT(0) +#define PORT_APPCLK_CGDIS BIT(8) +#define PORT_STATUS 0x00804 +#define PORT_STATUS_READY BIT(0) +#define PORT_REFCLK 0x00810 +#define PORT_REFCLK_EN BIT(0) +#define PORT_REFCLK_CGDIS BIT(8) +#define PORT_PERST 0x00814 +#define PORT_PERST_OFF BIT(0) +#define PORT_RID2SID(i16) (0x00828 + 4 * (i16)) +#define PORT_RID2SID_VALID BIT(31) +#define PORT_RID2SID_SID_SHIFT 16 +#define PORT_RID2SID_BUS_SHIFT 8 +#define PORT_RID2SID_DEV_SHIFT 3 +#define PORT_RID2SID_FUNC_SHIFT 0 +#define PORT_OUTS_PREQS_HDR 0x00980 +#define PORT_OUTS_PREQS_HDR_MASK GENMASK(9, 0) +#define PORT_OUTS_PREQS_DATA 0x00984 +#define PORT_OUTS_PREQS_DATA_MASK GENMASK(15, 0) +#define PORT_TUNCTRL 0x00988 +#define PORT_TUNCTRL_PERST_ON BIT(0) +#define PORT_TUNCTRL_PERST_ACK_REQ BIT(1) +#define PORT_TUNSTAT 0x0098c +#define PORT_TUNSTAT_PERST_ON BIT(0) +#define PORT_TUNSTAT_PERST_ACK_PEND BIT(1) +#define PORT_PREFMEM_ENABLE 0x00994 + +#define MAX_RID2SID 64 + +/* + * The doorbell address is set to 0xfffff000, which by convention + * matches what MacOS does, and it is possible to use any other + * address (in the bottom 4GB, as the base register is only 32bit). + * However, it has to be excluded from the IOVA range, and the DART + * driver has to know about it. 
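Because the MSI address register written below is only 32 bits wide, the port setup code guards the Kconfig-provided doorbell with BUILD_BUG_ON(upper_32_bits(DOORBELL_ADDR)). Outside the kernel the same check can be expressed as a static assertion; a minimal sketch, where the 0xfffff000 value simply mirrors the convention described above and everything else is illustrative:

	#include <assert.h>

	#define MSI_DOORBELL_ADDR	0xfffff000ULL	/* illustrative value */

	/* Reject any configuration where the doorbell would not fit in 32 bits. */
	static_assert((MSI_DOORBELL_ADDR >> 32) == 0,
		      "MSI doorbell must live in the low 4 GiB");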
+ */ +#define DOORBELL_ADDR CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR + +struct apple_pcie { + struct mutex lock; + struct device *dev; + void __iomem *base; + struct irq_domain *domain; + unsigned long *bitmap; + struct list_head ports; + struct completion event; + struct irq_fwspec fwspec; + u32 nvecs; +}; + +struct apple_pcie_port { + struct apple_pcie *pcie; + struct device_node *np; + void __iomem *base; + struct irq_domain *domain; + struct list_head entry; + DECLARE_BITMAP(sid_map, MAX_RID2SID); + int sid_map_sz; + int idx; +}; + +static void rmw_set(u32 set, void __iomem *addr) +{ + writel_relaxed(readl_relaxed(addr) | set, addr); +} + +static void rmw_clear(u32 clr, void __iomem *addr) +{ + writel_relaxed(readl_relaxed(addr) & ~clr, addr); +} + +static void apple_msi_top_irq_mask(struct irq_data *d) +{ + pci_msi_mask_irq(d); + irq_chip_mask_parent(d); +} + +static void apple_msi_top_irq_unmask(struct irq_data *d) +{ + pci_msi_unmask_irq(d); + irq_chip_unmask_parent(d); +} + +static struct irq_chip apple_msi_top_chip = { + .name = "PCIe MSI", + .irq_mask = apple_msi_top_irq_mask, + .irq_unmask = apple_msi_top_irq_unmask, + .irq_eoi = irq_chip_eoi_parent, + .irq_set_affinity = irq_chip_set_affinity_parent, + .irq_set_type = irq_chip_set_type_parent, +}; + +static void apple_msi_compose_msg(struct irq_data *data, struct msi_msg *msg) +{ + msg->address_hi = upper_32_bits(DOORBELL_ADDR); + msg->address_lo = lower_32_bits(DOORBELL_ADDR); + msg->data = data->hwirq; +} + +static struct irq_chip apple_msi_bottom_chip = { + .name = "MSI", + .irq_mask = irq_chip_mask_parent, + .irq_unmask = irq_chip_unmask_parent, + .irq_eoi = irq_chip_eoi_parent, + .irq_set_affinity = irq_chip_set_affinity_parent, + .irq_set_type = irq_chip_set_type_parent, + .irq_compose_msi_msg = apple_msi_compose_msg, +}; + +static int apple_msi_domain_alloc(struct irq_domain *domain, unsigned int virq, + unsigned int nr_irqs, void *args) +{ + struct apple_pcie *pcie = domain->host_data; + struct irq_fwspec fwspec = pcie->fwspec; + unsigned int i; + int ret, hwirq; + + mutex_lock(&pcie->lock); + + hwirq = bitmap_find_free_region(pcie->bitmap, pcie->nvecs, + order_base_2(nr_irqs)); + + mutex_unlock(&pcie->lock); + + if (hwirq < 0) + return -ENOSPC; + + fwspec.param[1] += hwirq; + + ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &fwspec); + if (ret) + return ret; + + for (i = 0; i < nr_irqs; i++) { + irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, + &apple_msi_bottom_chip, + domain->host_data); + } + + return 0; +} + +static void apple_msi_domain_free(struct irq_domain *domain, unsigned int virq, + unsigned int nr_irqs) +{ + struct irq_data *d = irq_domain_get_irq_data(domain, virq); + struct apple_pcie *pcie = domain->host_data; + + mutex_lock(&pcie->lock); + + bitmap_release_region(pcie->bitmap, d->hwirq, order_base_2(nr_irqs)); + + mutex_unlock(&pcie->lock); +} + +static const struct irq_domain_ops apple_msi_domain_ops = { + .alloc = apple_msi_domain_alloc, + .free = apple_msi_domain_free, +}; + +static struct msi_domain_info apple_msi_info = { + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX), + .chip = &apple_msi_top_chip, +}; + +static void apple_port_irq_mask(struct irq_data *data) +{ + struct apple_pcie_port *port = irq_data_get_irq_chip_data(data); + + writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKSET); +} + +static void apple_port_irq_unmask(struct irq_data *data) +{ + struct apple_pcie_port *port = 
irq_data_get_irq_chip_data(data); + + writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKCLR); +} + +static bool hwirq_is_intx(unsigned int hwirq) +{ + return BIT(hwirq) & PORT_INT_INTx_MASK; +} + +static void apple_port_irq_ack(struct irq_data *data) +{ + struct apple_pcie_port *port = irq_data_get_irq_chip_data(data); + + if (!hwirq_is_intx(data->hwirq)) + writel_relaxed(BIT(data->hwirq), port->base + PORT_INTSTAT); +} + +static int apple_port_irq_set_type(struct irq_data *data, unsigned int type) +{ + /* + * It doesn't seem that there is any way to configure the + * trigger, so assume INTx have to be level (as per the spec), + * and the rest is edge (which looks likely). + */ + if (hwirq_is_intx(data->hwirq) ^ !!(type & IRQ_TYPE_LEVEL_MASK)) + return -EINVAL; + + irqd_set_trigger_type(data, type); + return 0; +} + +static struct irq_chip apple_port_irqchip = { + .name = "PCIe", + .irq_ack = apple_port_irq_ack, + .irq_mask = apple_port_irq_mask, + .irq_unmask = apple_port_irq_unmask, + .irq_set_type = apple_port_irq_set_type, +}; + +static int apple_port_irq_domain_alloc(struct irq_domain *domain, + unsigned int virq, unsigned int nr_irqs, + void *args) +{ + struct apple_pcie_port *port = domain->host_data; + struct irq_fwspec *fwspec = args; + int i; + + for (i = 0; i < nr_irqs; i++) { + irq_flow_handler_t flow = handle_edge_irq; + unsigned int type = IRQ_TYPE_EDGE_RISING; + + if (hwirq_is_intx(fwspec->param[0] + i)) { + flow = handle_level_irq; + type = IRQ_TYPE_LEVEL_HIGH; + } + + irq_domain_set_info(domain, virq + i, fwspec->param[0] + i, + &apple_port_irqchip, port, flow, + NULL, NULL); + + irq_set_irq_type(virq + i, type); + } + + return 0; +} + +static void apple_port_irq_domain_free(struct irq_domain *domain, + unsigned int virq, unsigned int nr_irqs) +{ + int i; + + for (i = 0; i < nr_irqs; i++) { + struct irq_data *d = irq_domain_get_irq_data(domain, virq + i); + + irq_set_handler(virq + i, NULL); + irq_domain_reset_irq_data(d); + } +} + +static const struct irq_domain_ops apple_port_irq_domain_ops = { + .translate = irq_domain_translate_onecell, + .alloc = apple_port_irq_domain_alloc, + .free = apple_port_irq_domain_free, +}; + +static void apple_port_irq_handler(struct irq_desc *desc) +{ + struct apple_pcie_port *port = irq_desc_get_handler_data(desc); + struct irq_chip *chip = irq_desc_get_chip(desc); + unsigned long stat; + int i; + + chained_irq_enter(chip, desc); + + stat = readl_relaxed(port->base + PORT_INTSTAT); + + for_each_set_bit(i, &stat, 32) + generic_handle_domain_irq(port->domain, i); + + chained_irq_exit(chip, desc); +} + +static int apple_pcie_port_setup_irq(struct apple_pcie_port *port) +{ + struct fwnode_handle *fwnode = &port->np->fwnode; + unsigned int irq; + + /* FIXME: consider moving each interrupt under each port */ + irq = irq_of_parse_and_map(to_of_node(dev_fwnode(port->pcie->dev)), + port->idx); + if (!irq) + return -ENXIO; + + port->domain = irq_domain_create_linear(fwnode, 32, + &apple_port_irq_domain_ops, + port); + if (!port->domain) + return -ENOMEM; + + /* Disable all interrupts */ + writel_relaxed(~0, port->base + PORT_INTMSKSET); + writel_relaxed(~0, port->base + PORT_INTSTAT); + + irq_set_chained_handler_and_data(irq, apple_port_irq_handler, port); + + /* Configure MSI base address */ + BUILD_BUG_ON(upper_32_bits(DOORBELL_ADDR)); + writel_relaxed(lower_32_bits(DOORBELL_ADDR), port->base + PORT_MSIADDR); + + /* Enable MSIs, shared between all ports */ + writel_relaxed(0, port->base + PORT_MSIBASE); + 
writel_relaxed((ilog2(port->pcie->nvecs) << PORT_MSICFG_L2MSINUM_SHIFT) | + PORT_MSICFG_EN, port->base + PORT_MSICFG); + + return 0; +} + +static irqreturn_t apple_pcie_port_irq(int irq, void *data) +{ + struct apple_pcie_port *port = data; + unsigned int hwirq = irq_domain_get_irq_data(port->domain, irq)->hwirq; + + switch (hwirq) { + case PORT_INT_LINK_UP: + dev_info_ratelimited(port->pcie->dev, "Link up on %pOF\n", + port->np); + complete_all(&port->pcie->event); + break; + case PORT_INT_LINK_DOWN: + dev_info_ratelimited(port->pcie->dev, "Link down on %pOF\n", + port->np); + break; + default: + return IRQ_NONE; + } + + return IRQ_HANDLED; +} + +static int apple_pcie_port_register_irqs(struct apple_pcie_port *port) +{ + static struct { + unsigned int hwirq; + const char *name; + } port_irqs[] = { + { PORT_INT_LINK_UP, "Link up", }, + { PORT_INT_LINK_DOWN, "Link down", }, + }; + int i; + + for (i = 0; i < ARRAY_SIZE(port_irqs); i++) { + struct irq_fwspec fwspec = { + .fwnode = &port->np->fwnode, + .param_count = 1, + .param = { + [0] = port_irqs[i].hwirq, + }, + }; + unsigned int irq; + int ret; + + irq = irq_domain_alloc_irqs(port->domain, 1, NUMA_NO_NODE, + &fwspec); + if (WARN_ON(!irq)) + continue; + + ret = request_irq(irq, apple_pcie_port_irq, 0, + port_irqs[i].name, port); + WARN_ON(ret); + } + + return 0; +} + +static int apple_pcie_setup_refclk(struct apple_pcie *pcie, + struct apple_pcie_port *port) +{ + u32 stat; + int res; + + res = readl_relaxed_poll_timeout(pcie->base + CORE_RC_PHYIF_STAT, stat, + stat & CORE_RC_PHYIF_STAT_REFCLK, + 100, 50000); + if (res < 0) + return res; + + rmw_set(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx)); + rmw_set(CORE_LANE_CFG_REFCLK0REQ, pcie->base + CORE_LANE_CFG(port->idx)); + + res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx), + stat, stat & CORE_LANE_CFG_REFCLK0ACK, + 100, 50000); + if (res < 0) + return res; + + rmw_set(CORE_LANE_CFG_REFCLK1, pcie->base + CORE_LANE_CFG(port->idx)); + res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx), + stat, stat & CORE_LANE_CFG_REFCLK1, + 100, 50000); + + if (res < 0) + return res; + + rmw_clear(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx)); + + rmw_set(CORE_LANE_CFG_REFCLKEN, pcie->base + CORE_LANE_CFG(port->idx)); + rmw_set(PORT_REFCLK_EN, port->base + PORT_REFCLK); + + return 0; +} + +static u32 apple_pcie_rid2sid_write(struct apple_pcie_port *port, + int idx, u32 val) +{ + writel_relaxed(val, port->base + PORT_RID2SID(idx)); + /* Read back to ensure completion of the write */ + return readl_relaxed(port->base + PORT_RID2SID(idx)); +} + +static int apple_pcie_setup_port(struct apple_pcie *pcie, + struct device_node *np) +{ + struct platform_device *platform = to_platform_device(pcie->dev); + struct apple_pcie_port *port; + struct gpio_desc *reset; + u32 stat, idx; + int ret, i; + + reset = gpiod_get_from_of_node(np, "reset-gpios", 0, + GPIOD_OUT_LOW, "#PERST"); + if (IS_ERR(reset)) + return PTR_ERR(reset); + + port = devm_kzalloc(pcie->dev, sizeof(*port), GFP_KERNEL); + if (!port) + return -ENOMEM; + + ret = of_property_read_u32_index(np, "reg", 0, &idx); + if (ret) + return ret; + + /* Use the first reg entry to work out the port index */ + port->idx = idx >> 11; + port->pcie = pcie; + port->np = np; + + port->base = devm_platform_ioremap_resource(platform, port->idx + 2); + if (IS_ERR(port->base)) + return PTR_ERR(port->base); + + rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK); + + ret = apple_pcie_setup_refclk(pcie, port); + 
if (ret < 0) + return ret; + + rmw_set(PORT_PERST_OFF, port->base + PORT_PERST); + gpiod_set_value(reset, 1); + + ret = readl_relaxed_poll_timeout(port->base + PORT_STATUS, stat, + stat & PORT_STATUS_READY, 100, 250000); + if (ret < 0) { + dev_err(pcie->dev, "port %pOF ready wait timeout\n", np); + return ret; + } + + ret = apple_pcie_port_setup_irq(port); + if (ret) + return ret; + + /* Reset all RID/SID mappings, and check for RAZ/WI registers */ + for (i = 0; i < MAX_RID2SID; i++) { + if (apple_pcie_rid2sid_write(port, i, 0xbad1d) != 0xbad1d) + break; + apple_pcie_rid2sid_write(port, i, 0); + } + + dev_dbg(pcie->dev, "%pOF: %d RID/SID mapping entries\n", np, i); + + port->sid_map_sz = i; + + list_add_tail(&port->entry, &pcie->ports); + init_completion(&pcie->event); + + ret = apple_pcie_port_register_irqs(port); + WARN_ON(ret); + + writel_relaxed(PORT_LTSSMCTL_START, port->base + PORT_LTSSMCTL); + + if (!wait_for_completion_timeout(&pcie->event, HZ / 10)) + dev_warn(pcie->dev, "%pOF link didn't come up\n", np); + + return 0; +} + +static int apple_msi_init(struct apple_pcie *pcie) +{ + struct fwnode_handle *fwnode = dev_fwnode(pcie->dev); + struct of_phandle_args args = {}; + struct irq_domain *parent; + int ret; + + ret = of_parse_phandle_with_args(to_of_node(fwnode), "msi-ranges", + "#interrupt-cells", 0, &args); + if (ret) + return ret; + + ret = of_property_read_u32_index(to_of_node(fwnode), "msi-ranges", + args.args_count + 1, &pcie->nvecs); + if (ret) + return ret; + + of_phandle_args_to_fwspec(args.np, args.args, args.args_count, + &pcie->fwspec); + + pcie->bitmap = devm_bitmap_zalloc(pcie->dev, pcie->nvecs, GFP_KERNEL); + if (!pcie->bitmap) + return -ENOMEM; + + parent = irq_find_matching_fwspec(&pcie->fwspec, DOMAIN_BUS_WIRED); + if (!parent) { + dev_err(pcie->dev, "failed to find parent domain\n"); + return -ENXIO; + } + + parent = irq_domain_create_hierarchy(parent, 0, pcie->nvecs, fwnode, + &apple_msi_domain_ops, pcie); + if (!parent) { + dev_err(pcie->dev, "failed to create IRQ domain\n"); + return -ENOMEM; + } + irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS); + + pcie->domain = pci_msi_create_irq_domain(fwnode, &apple_msi_info, + parent); + if (!pcie->domain) { + dev_err(pcie->dev, "failed to create MSI domain\n"); + irq_domain_remove(parent); + return -ENOMEM; + } + + return 0; +} + +static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev) +{ + struct pci_config_window *cfg = pdev->sysdata; + struct apple_pcie *pcie = cfg->priv; + struct pci_dev *port_pdev; + struct apple_pcie_port *port; + + /* Find the root port this device is on */ + port_pdev = pcie_find_root_port(pdev); + + /* If finding the port itself, nothing to do */ + if (WARN_ON(!port_pdev) || pdev == port_pdev) + return NULL; + + list_for_each_entry(port, &pcie->ports, entry) { + if (port->idx == PCI_SLOT(port_pdev->devfn)) + return port; + } + + return NULL; +} + +static int apple_pcie_add_device(struct apple_pcie_port *port, + struct pci_dev *pdev) +{ + u32 sid, rid = PCI_DEVID(pdev->bus->number, pdev->devfn); + int idx, err; + + dev_dbg(&pdev->dev, "added to bus %s, index %d\n", + pci_name(pdev->bus->self), port->idx); + + err = of_map_id(port->pcie->dev->of_node, rid, "iommu-map", + "iommu-map-mask", NULL, &sid); + if (err) + return err; + + mutex_lock(&port->pcie->lock); + + idx = bitmap_find_free_region(port->sid_map, port->sid_map_sz, 0); + if (idx >= 0) { + apple_pcie_rid2sid_write(port, idx, + PORT_RID2SID_VALID | + (sid << PORT_RID2SID_SID_SHIFT) | rid); + + dev_dbg(&pdev->dev, 
"mapping RID%x to SID%x (index %d)\n", + rid, sid, idx); + } + + mutex_unlock(&port->pcie->lock); + + return idx >= 0 ? 0 : -ENOSPC; +} + +static void apple_pcie_release_device(struct apple_pcie_port *port, + struct pci_dev *pdev) +{ + u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn); + int idx; + + mutex_lock(&port->pcie->lock); + + for_each_set_bit(idx, port->sid_map, port->sid_map_sz) { + u32 val; + + val = readl_relaxed(port->base + PORT_RID2SID(idx)); + if ((val & 0xffff) == rid) { + apple_pcie_rid2sid_write(port, idx, 0); + bitmap_release_region(port->sid_map, idx, 0); + dev_dbg(&pdev->dev, "Released %x (%d)\n", val, idx); + break; + } + } + + mutex_unlock(&port->pcie->lock); +} + +static int apple_pcie_bus_notifier(struct notifier_block *nb, + unsigned long action, + void *data) +{ + struct device *dev = data; + struct pci_dev *pdev = to_pci_dev(dev); + struct apple_pcie_port *port; + int err; + + /* + * This is a bit ugly. We assume that if we get notified for + * any PCI device, we must be in charge of it, and that there + * is no other PCI controller in the whole system. It probably + * holds for now, but who knows for how long? + */ + port = apple_pcie_get_port(pdev); + if (!port) + return NOTIFY_DONE; + + switch (action) { + case BUS_NOTIFY_ADD_DEVICE: + err = apple_pcie_add_device(port, pdev); + if (err) + return notifier_from_errno(err); + break; + case BUS_NOTIFY_DEL_DEVICE: + apple_pcie_release_device(port, pdev); + break; + default: + return NOTIFY_DONE; + } + + return NOTIFY_OK; +} + +static struct notifier_block apple_pcie_nb = { + .notifier_call = apple_pcie_bus_notifier, +}; + +static int apple_pcie_init(struct pci_config_window *cfg) +{ + struct device *dev = cfg->parent; + struct platform_device *platform = to_platform_device(dev); + struct device_node *of_port; + struct apple_pcie *pcie; + int ret; + + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); + if (!pcie) + return -ENOMEM; + + pcie->dev = dev; + + mutex_init(&pcie->lock); + + pcie->base = devm_platform_ioremap_resource(platform, 1); + if (IS_ERR(pcie->base)) + return PTR_ERR(pcie->base); + + cfg->priv = pcie; + INIT_LIST_HEAD(&pcie->ports); + + for_each_child_of_node(dev->of_node, of_port) { + ret = apple_pcie_setup_port(pcie, of_port); + if (ret) { + dev_err(pcie->dev, "Port %pOF setup fail: %d\n", of_port, ret); + of_node_put(of_port); + return ret; + } + } + + return apple_msi_init(pcie); +} + +static int apple_pcie_probe(struct platform_device *pdev) +{ + int ret; + + ret = bus_register_notifier(&pci_bus_type, &apple_pcie_nb); + if (ret) + return ret; + + ret = pci_host_common_probe(pdev); + if (ret) + bus_unregister_notifier(&pci_bus_type, &apple_pcie_nb); + + return ret; +} + +static const struct pci_ecam_ops apple_pcie_cfg_ecam_ops = { + .init = apple_pcie_init, + .pci_ops = { + .map_bus = pci_ecam_map_bus, + .read = pci_generic_config_read, + .write = pci_generic_config_write, + } +}; + +static const struct of_device_id apple_pcie_of_match[] = { + { .compatible = "apple,pcie", .data = &apple_pcie_cfg_ecam_ops }, + { } +}; +MODULE_DEVICE_TABLE(of, apple_pcie_of_match); + +static struct platform_driver apple_pcie_driver = { + .probe = apple_pcie_probe, + .driver = { + .name = "pcie-apple", + .of_match_table = apple_pcie_of_match, + .suppress_bind_attrs = true, + }, +}; +module_platform_driver(apple_pcie_driver); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c index cc30215f5a43..1fc7bd49a7ad 100644 --- 
a/drivers/pci/controller/pcie-brcmstb.c +++ b/drivers/pci/controller/pcie-brcmstb.c @@ -145,7 +145,7 @@ #define BRCM_INT_PCI_MSI_LEGACY_NR 8 #define BRCM_INT_PCI_MSI_SHIFT 0 -/* MSI target adresses */ +/* MSI target addresses */ #define BRCM_MSI_TARGET_ADDR_LT_4GB 0x0fffffffcULL #define BRCM_MSI_TARGET_ADDR_GT_4GB 0xffffffffcULL diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c index 30ac5fbefbbf..36b9d2c46cfa 100644 --- a/drivers/pci/controller/pcie-iproc.c +++ b/drivers/pci/controller/pcie-iproc.c @@ -249,7 +249,7 @@ enum iproc_pcie_reg { /* * To hold the address of the register where the MSI writes are - * programed. When ARM GICv3 ITS is used, this should be programmed + * programmed. When ARM GICv3 ITS is used, this should be programmed * with the address of the GITS_TRANSLATER register. */ IPROC_PCIE_MSI_ADDR_LO, diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/pci/controller/pcie-mt7621.c index 503cb1fca2e0..b60dfb45ef7b 100644 --- a/drivers/staging/mt7621-pci/pci-mt7621.c +++ b/drivers/pci/controller/pcie-mt7621.c @@ -30,18 +30,18 @@ #include <linux/reset.h> #include <linux/sys_soc.h> -/* MediaTek specific configuration registers */ +/* MediaTek-specific configuration registers */ #define PCIE_FTS_NUM 0x70c #define PCIE_FTS_NUM_MASK GENMASK(15, 8) #define PCIE_FTS_NUM_L0(x) (((x) & 0xff) << 8) /* Host-PCI bridge registers */ #define RALINK_PCI_PCICFG_ADDR 0x0000 -#define RALINK_PCI_PCIMSK_ADDR 0x000C +#define RALINK_PCI_PCIMSK_ADDR 0x000c #define RALINK_PCI_CONFIG_ADDR 0x0020 #define RALINK_PCI_CONFIG_DATA 0x0024 #define RALINK_PCI_MEMBASE 0x0028 -#define RALINK_PCI_IOBASE 0x002C +#define RALINK_PCI_IOBASE 0x002c /* PCIe RC control registers */ #define RALINK_PCI_ID 0x0030 @@ -132,7 +132,7 @@ static inline void pcie_port_write(struct mt7621_pcie_port *port, static inline u32 mt7621_pci_get_cfgaddr(unsigned int bus, unsigned int slot, unsigned int func, unsigned int where) { - return (((where & 0xF00) >> 8) << 24) | (bus << 16) | (slot << 11) | + return (((where & 0xf00) >> 8) << 24) | (bus << 16) | (slot << 11) | (func << 8) | (where & 0xfc) | 0x80000000; } @@ -217,7 +217,7 @@ static int setup_cm_memory_region(struct pci_host_bridge *host) entry = resource_list_first_type(&host->windows, IORESOURCE_MEM); if (!entry) { - dev_err(dev, "Cannot get memory resource\n"); + dev_err(dev, "cannot get memory resource\n"); return -EINVAL; } @@ -280,7 +280,7 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie, port->gpio_rst = devm_gpiod_get_index_optional(dev, "reset", slot, GPIOD_OUT_LOW); if (IS_ERR(port->gpio_rst)) { - dev_err(dev, "Failed to get GPIO for PCIe%d\n", slot); + dev_err(dev, "failed to get GPIO for PCIe%d\n", slot); err = PTR_ERR(port->gpio_rst); goto remove_reset; } @@ -409,7 +409,7 @@ static int mt7621_pcie_init_ports(struct mt7621_pcie *pcie) err = mt7621_pcie_init_port(port); if (err) { - dev_err(dev, "Initiating port %d failed\n", slot); + dev_err(dev, "initializing port %d failed\n", slot); list_del(&port->list); } } @@ -476,7 +476,7 @@ static int mt7621_pcie_enable_ports(struct pci_host_bridge *host) entry = resource_list_first_type(&host->windows, IORESOURCE_IO); if (!entry) { - dev_err(dev, "Cannot get io resource\n"); + dev_err(dev, "cannot get io resource\n"); return -EINVAL; } @@ -541,25 +541,25 @@ static int mt7621_pci_probe(struct platform_device *pdev) err = mt7621_pcie_parse_dt(pcie); if (err) { - dev_err(dev, "Parsing DT failed\n"); + dev_err(dev, "parsing DT failed\n"); return err; } err = 
mt7621_pcie_init_ports(pcie); if (err) { - dev_err(dev, "Nothing connected in virtual bridges\n"); + dev_err(dev, "nothing connected in virtual bridges\n"); return 0; } err = mt7621_pcie_enable_ports(bridge); if (err) { - dev_err(dev, "Error enabling pcie ports\n"); + dev_err(dev, "error enabling pcie ports\n"); goto remove_resets; } err = setup_cm_memory_region(bridge); if (err) { - dev_err(dev, "Error setting up iocu mem regions\n"); + dev_err(dev, "error setting up iocu mem regions\n"); goto remove_resets; } diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c index aa1cf24a5a72..f9682df1da61 100644 --- a/drivers/pci/controller/pcie-rcar-ep.c +++ b/drivers/pci/controller/pcie-rcar-ep.c @@ -6,16 +6,13 @@ * Author: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> */ -#include <linux/clk.h> #include <linux/delay.h> #include <linux/of_address.h> -#include <linux/of_irq.h> -#include <linux/of_pci.h> #include <linux/of_platform.h> #include <linux/pci.h> #include <linux/pci-epc.h> -#include <linux/phy/phy.h> #include <linux/platform_device.h> +#include <linux/pm_runtime.h> #include "pcie-rcar.h" diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c index 8f3131844e77..e12c2d8be05a 100644 --- a/drivers/pci/controller/pcie-rcar-host.c +++ b/drivers/pci/controller/pcie-rcar-host.c @@ -24,13 +24,11 @@ #include <linux/msi.h> #include <linux/of_address.h> #include <linux/of_irq.h> -#include <linux/of_pci.h> #include <linux/of_platform.h> #include <linux/pci.h> #include <linux/phy/phy.h> #include <linux/platform_device.h> #include <linux/pm_runtime.h> -#include <linux/slab.h> #include "pcie-rcar.h" diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index a5987e52700e..a45e8e59d3d4 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -6,6 +6,7 @@ #include <linux/device.h> #include <linux/interrupt.h> +#include <linux/iommu.h> #include <linux/irq.h> #include <linux/kernel.h> #include <linux/module.h> @@ -18,8 +19,6 @@ #include <linux/rcupdate.h> #include <asm/irqdomain.h> -#include <asm/device.h> -#include <asm/msi.h> #define VMD_CFGBAR 0 #define VMD_MEMBAR1 2 @@ -70,6 +69,8 @@ enum vmd_features { VMD_FEAT_CAN_BYPASS_MSI_REMAP = (1 << 4), }; +static DEFINE_IDA(vmd_instance_ida); + /* * Lock for manipulating VMD IRQ lists. */ @@ -120,6 +121,8 @@ struct vmd_dev { struct pci_bus *bus; u8 busn_start; u8 first_vec; + char *name; + int instance; }; static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus) @@ -650,7 +653,7 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd) INIT_LIST_HEAD(&vmd->irqs[i].irq_list); err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i), vmd_irq, IRQF_NO_THREAD, - "vmd", &vmd->irqs[i]); + vmd->name, &vmd->irqs[i]); if (err) return err; } @@ -761,7 +764,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) * acceptable because the guest is usually CPU-limited and MSI * remapping doesn't become a performance bottleneck. 
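The condition that follows folds in one more case: even when the device advertises VMD_FEAT_CAN_BYPASS_MSI_REMAP, the driver keeps its own MSI-X remapping, and therefore allocates its own IRQs, whenever the IOMMU already provides interrupt remapping. A compact restatement of that decision as a sketch; the names are simplified and are not the driver's API:

	#include <stdbool.h>

	/* When must VMD set up its own IRQ domain instead of bypassing remapping? */
	bool vmd_needs_own_irq_domain(bool iommu_intr_remap,
				      bool can_bypass_msi_remap,
				      bool has_vector_offset)
	{
		return iommu_intr_remap || !can_bypass_msi_remap || has_vector_offset;
	}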
*/ - if (!(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) || + if (iommu_capable(vmd->dev->dev.bus, IOMMU_CAP_INTR_REMAP) || + !(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) || offset[0] || offset[1]) { ret = vmd_alloc_irqs(vmd); if (ret) @@ -834,18 +838,32 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id) return -ENOMEM; vmd->dev = dev; + vmd->instance = ida_simple_get(&vmd_instance_ida, 0, 0, GFP_KERNEL); + if (vmd->instance < 0) + return vmd->instance; + + vmd->name = kasprintf(GFP_KERNEL, "vmd%d", vmd->instance); + if (!vmd->name) { + err = -ENOMEM; + goto out_release_instance; + } + err = pcim_enable_device(dev); if (err < 0) - return err; + goto out_release_instance; vmd->cfgbar = pcim_iomap(dev, VMD_CFGBAR, 0); - if (!vmd->cfgbar) - return -ENOMEM; + if (!vmd->cfgbar) { + err = -ENOMEM; + goto out_release_instance; + } pci_set_master(dev); if (dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)) && - dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) - return -ENODEV; + dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) { + err = -ENODEV; + goto out_release_instance; + } if (features & VMD_FEAT_OFFSET_FIRST_VECTOR) vmd->first_vec = 1; @@ -854,11 +872,16 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id) pci_set_drvdata(dev, vmd); err = vmd_enable_domain(vmd, features); if (err) - return err; + goto out_release_instance; dev_info(&vmd->dev->dev, "Bound to PCI domain %04x\n", vmd->sysdata.domain); return 0; + + out_release_instance: + ida_simple_remove(&vmd_instance_ida, vmd->instance); + kfree(vmd->name); + return err; } static void vmd_cleanup_srcu(struct vmd_dev *vmd) @@ -879,6 +902,8 @@ static void vmd_remove(struct pci_dev *dev) vmd_cleanup_srcu(vmd); vmd_detach_resources(vmd); vmd_remove_irq_domain(vmd); + ida_simple_remove(&vmd_instance_ida, vmd->instance); + kfree(vmd->name); } #ifdef CONFIG_PM_SLEEP @@ -903,7 +928,7 @@ static int vmd_resume(struct device *dev) for (i = 0; i < vmd->msix_count; i++) { err = devm_request_irq(dev, pci_irq_vector(pdev, i), vmd_irq, IRQF_NO_THREAD, - "vmd", &vmd->irqs[i]); + vmd->name, &vmd->irqs[i]); if (err) return err; } diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/endpoint/functions/pci-epf-ntb.c index 8b4756159f15..5a03401f4571 100644 --- a/drivers/pci/endpoint/functions/pci-epf-ntb.c +++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c @@ -1937,7 +1937,7 @@ static ssize_t epf_ntb_##_name##_show(struct config_item *item, \ struct config_group *group = to_config_group(item); \ struct epf_ntb *ntb = to_epf_ntb(group); \ \ - return sprintf(page, "%d\n", ntb->_name); \ + return sysfs_emit(page, "%d\n", ntb->_name); \ } #define EPF_NTB_W(_name) \ @@ -1947,11 +1947,9 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \ struct config_group *group = to_config_group(item); \ struct epf_ntb *ntb = to_epf_ntb(group); \ u32 val; \ - int ret; \ \ - ret = kstrtou32(page, 0, &val); \ - if (ret) \ - return ret; \ + if (kstrtou32(page, 0, &val) < 0) \ + return -EINVAL; \ \ ntb->_name = val; \ \ @@ -1968,7 +1966,7 @@ static ssize_t epf_ntb_##_name##_show(struct config_item *item, \ \ sscanf(#_name, "mw%d", &win_no); \ \ - return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]); \ + return sysfs_emit(page, "%lld\n", ntb->mws_size[win_no - 1]); \ } #define EPF_NTB_MW_W(_name) \ @@ -1980,11 +1978,9 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \ struct device *dev = &ntb->epf->dev; \ int win_no; \ u64 val; \ - int ret; \ \ - ret = 
kstrtou64(page, 0, &val); \ - if (ret) \ - return ret; \ + if (kstrtou64(page, 0, &val) < 0) \ + return -EINVAL; \ \ if (sscanf(#_name, "mw%d", &win_no) != 1) \ return -EINVAL; \ @@ -2005,11 +2001,9 @@ static ssize_t epf_ntb_num_mws_store(struct config_item *item, struct config_group *group = to_config_group(item); struct epf_ntb *ntb = to_epf_ntb(group); u32 val; - int ret; - ret = kstrtou32(page, 0, &val); - if (ret) - return ret; + if (kstrtou32(page, 0, &val) < 0) + return -EINVAL; if (val > MAX_MW) return -EINVAL; diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c index 999911801877..d4850bdd837f 100644 --- a/drivers/pci/endpoint/pci-ep-cfs.c +++ b/drivers/pci/endpoint/pci-ep-cfs.c @@ -175,9 +175,8 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page, epc = epc_group->epc; - ret = kstrtobool(page, &start); - if (ret) - return ret; + if (kstrtobool(page, &start) < 0) + return -EINVAL; if (!start) { pci_epc_stop(epc); @@ -198,8 +197,7 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page, static ssize_t pci_epc_start_show(struct config_item *item, char *page) { - return sprintf(page, "%d\n", - to_pci_epc_group(item)->start); + return sysfs_emit(page, "%d\n", to_pci_epc_group(item)->start); } CONFIGFS_ATTR(pci_epc_, start); @@ -321,7 +319,7 @@ static ssize_t pci_epf_##_name##_show(struct config_item *item, char *page) \ struct pci_epf *epf = to_pci_epf_group(item)->epf; \ if (WARN_ON_ONCE(!epf->header)) \ return -EINVAL; \ - return sprintf(page, "0x%04x\n", epf->header->_name); \ + return sysfs_emit(page, "0x%04x\n", epf->header->_name); \ } #define PCI_EPF_HEADER_W_u32(_name) \ @@ -329,13 +327,11 @@ static ssize_t pci_epf_##_name##_store(struct config_item *item, \ const char *page, size_t len) \ { \ u32 val; \ - int ret; \ struct pci_epf *epf = to_pci_epf_group(item)->epf; \ if (WARN_ON_ONCE(!epf->header)) \ return -EINVAL; \ - ret = kstrtou32(page, 0, &val); \ - if (ret) \ - return ret; \ + if (kstrtou32(page, 0, &val) < 0) \ + return -EINVAL; \ epf->header->_name = val; \ return len; \ } @@ -345,13 +341,11 @@ static ssize_t pci_epf_##_name##_store(struct config_item *item, \ const char *page, size_t len) \ { \ u16 val; \ - int ret; \ struct pci_epf *epf = to_pci_epf_group(item)->epf; \ if (WARN_ON_ONCE(!epf->header)) \ return -EINVAL; \ - ret = kstrtou16(page, 0, &val); \ - if (ret) \ - return ret; \ + if (kstrtou16(page, 0, &val) < 0) \ + return -EINVAL; \ epf->header->_name = val; \ return len; \ } @@ -361,13 +355,11 @@ static ssize_t pci_epf_##_name##_store(struct config_item *item, \ const char *page, size_t len) \ { \ u8 val; \ - int ret; \ struct pci_epf *epf = to_pci_epf_group(item)->epf; \ if (WARN_ON_ONCE(!epf->header)) \ return -EINVAL; \ - ret = kstrtou8(page, 0, &val); \ - if (ret) \ - return ret; \ + if (kstrtou8(page, 0, &val) < 0) \ + return -EINVAL; \ epf->header->_name = val; \ return len; \ } @@ -376,11 +368,9 @@ static ssize_t pci_epf_msi_interrupts_store(struct config_item *item, const char *page, size_t len) { u8 val; - int ret; - ret = kstrtou8(page, 0, &val); - if (ret) - return ret; + if (kstrtou8(page, 0, &val) < 0) + return -EINVAL; to_pci_epf_group(item)->epf->msi_interrupts = val; @@ -390,19 +380,17 @@ static ssize_t pci_epf_msi_interrupts_store(struct config_item *item, static ssize_t pci_epf_msi_interrupts_show(struct config_item *item, char *page) { - return sprintf(page, "%d\n", - to_pci_epf_group(item)->epf->msi_interrupts); + return sysfs_emit(page, "%d\n", + 
to_pci_epf_group(item)->epf->msi_interrupts); } static ssize_t pci_epf_msix_interrupts_store(struct config_item *item, const char *page, size_t len) { u16 val; - int ret; - ret = kstrtou16(page, 0, &val); - if (ret) - return ret; + if (kstrtou16(page, 0, &val) < 0) + return -EINVAL; to_pci_epf_group(item)->epf->msix_interrupts = val; @@ -412,8 +400,8 @@ static ssize_t pci_epf_msix_interrupts_store(struct config_item *item, static ssize_t pci_epf_msix_interrupts_show(struct config_item *item, char *page) { - return sprintf(page, "%d\n", - to_pci_epf_group(item)->epf->msix_interrupts); + return sysfs_emit(page, "%d\n", + to_pci_epf_group(item)->epf->msix_interrupts); } PCI_EPF_HEADER_R(vendorid) diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c index ecbb0fb3b653..38621558d397 100644 --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c @@ -700,7 +700,7 @@ EXPORT_SYMBOL_GPL(pci_epc_linkup); /** * pci_epc_init_notify() - Notify the EPF device that EPC device's core * initialization is completed. - * @epc: the EPC device whose core initialization is completeds + * @epc: the EPC device whose core initialization is completed * * Invoke to Notify the EPF device that the EPC device's initialization * is completed. diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c index 8aea16380870..9ed556936f48 100644 --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c @@ -224,7 +224,7 @@ EXPORT_SYMBOL_GPL(pci_epf_add_vepf); * be removed * @epf_vf: the virtual EP function to be removed * - * Invoke to remove a virtual endpoint function from the physcial endpoint + * Invoke to remove a virtual endpoint function from the physical endpoint * function. */ void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf) @@ -432,7 +432,7 @@ EXPORT_SYMBOL_GPL(pci_epf_destroy); /** * pci_epf_create() - create a new PCI EPF device * @name: the name of the PCI EPF device. This name will be used to bind the - * the EPF device to a EPF driver + * EPF device to a EPF driver * * Invoke to create a new PCI EPF device by providing the name of the function * device. diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c index f031302ad401..12f4b351be67 100644 --- a/drivers/pci/hotplug/acpiphp_glue.c +++ b/drivers/pci/hotplug/acpiphp_glue.c @@ -22,7 +22,7 @@ * when the bridge is scanned and it loses a refcount when the bridge * is removed. * - When a P2P bridge is present, we elevate the refcount on the subordinate - * bus. It loses the refcount when the the driver unloads. + * bus. It loses the refcount when the driver unloads. */ #define pr_fmt(fmt) "acpiphp_glue: " fmt diff --git a/drivers/pci/hotplug/cpqphp.h b/drivers/pci/hotplug/cpqphp.h index 77e4e0142fbc..2f7b49ea96e2 100644 --- a/drivers/pci/hotplug/cpqphp.h +++ b/drivers/pci/hotplug/cpqphp.h @@ -15,7 +15,7 @@ #define _CPQPHP_H #include <linux/interrupt.h> -#include <asm/io.h> /* for read? and write? functions */ +#include <linux/io.h> /* for read? and write? functions */ #include <linux/delay.h> /* for delays */ #include <linux/mutex.h> #include <linux/sched/signal.h> /* for signal_pending() */ diff --git a/drivers/pci/hotplug/cpqphp_ctrl.c b/drivers/pci/hotplug/cpqphp_ctrl.c index 1b26ca0b3701..ed7b58eb64d2 100644 --- a/drivers/pci/hotplug/cpqphp_ctrl.c +++ b/drivers/pci/hotplug/cpqphp_ctrl.c @@ -519,7 +519,7 @@ error: * @head: list to search * @size: size of node to find, must be a power of two. 
* - * Description: This function sorts the resource list by size and then returns + * Description: This function sorts the resource list by size and then * returns the first node of "size" length that is not in the ISA aliasing * window. If it finds a node larger than "size" it will split it up. */ @@ -1202,7 +1202,7 @@ static u8 set_controller_speed(struct controller *ctrl, u8 adapter_speed, u8 hp_ mdelay(5); - /* Reenable interrupts */ + /* Re-enable interrupts */ writel(0, ctrl->hpc_reg + INT_MASK); pci_write_config_byte(ctrl->pci_dev, 0x41, reg); diff --git a/drivers/pci/hotplug/cpqphp_pci.c b/drivers/pci/hotplug/cpqphp_pci.c index 1b2b3f3b648b..9038039ad6db 100644 --- a/drivers/pci/hotplug/cpqphp_pci.c +++ b/drivers/pci/hotplug/cpqphp_pci.c @@ -189,8 +189,10 @@ int cpqhp_set_irq(u8 bus_num, u8 dev_num, u8 int_pin, u8 irq_num) /* This should only be for x86 as it sets the Edge Level * Control Register */ - outb((u8) (temp_word & 0xFF), 0x4d0); outb((u8) ((temp_word & - 0xFF00) >> 8), 0x4d1); rc = 0; } + outb((u8)(temp_word & 0xFF), 0x4d0); + outb((u8)((temp_word & 0xFF00) >> 8), 0x4d1); + rc = 0; + } return rc; } diff --git a/drivers/pci/hotplug/ibmphp.h b/drivers/pci/hotplug/ibmphp.h index e90a4ebf6550..0399c60d2ec1 100644 --- a/drivers/pci/hotplug/ibmphp.h +++ b/drivers/pci/hotplug/ibmphp.h @@ -352,7 +352,7 @@ struct resource_node { u32 len; int type; /* MEM, IO, PFMEM */ u8 fromMem; /* this is to indicate that the range is from - * from the Memory bucket rather than from PFMem */ + * the Memory bucket rather than from PFMem */ struct resource_node *next; struct resource_node *nextRange; /* for the other mem range on bus */ }; @@ -736,7 +736,7 @@ struct controller { int ibmphp_init_devno(struct slot **); /* This function is called from EBDA, so we need it not be static */ int ibmphp_do_disable_slot(struct slot *slot_cur); -int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be be static */ +int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be static */ int ibmphp_configure_card(struct pci_func *, u8); int ibmphp_unconfigure_card(struct slot **, int); extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops; diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index 69fd401691be..918dccbc74b6 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h @@ -189,6 +189,8 @@ int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status); int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status); int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status); +int pciehp_slot_reset(struct pcie_device *dev); + static inline const char *slot_name(struct controller *ctrl) { return hotplug_slot_name(&ctrl->hotplug_slot); diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c index ad3393930ecb..f34114d45259 100644 --- a/drivers/pci/hotplug/pciehp_core.c +++ b/drivers/pci/hotplug/pciehp_core.c @@ -351,6 +351,8 @@ static struct pcie_port_service_driver hpdriver_portdrv = { .runtime_suspend = pciehp_runtime_suspend, .runtime_resume = pciehp_runtime_resume, #endif /* PM */ + + .slot_reset = pciehp_slot_reset, }; int __init pcie_hp_init(void) diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index 3024d7e85e6a..83a0fa119cae 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c @@ -862,6 +862,32 @@ void 
pcie_disable_interrupt(struct controller *ctrl) pcie_write_cmd(ctrl, 0, mask); } +/** + * pciehp_slot_reset() - ignore link event caused by error-induced hot reset + * @dev: PCI Express port service device + * + * Called from pcie_portdrv_slot_reset() after AER or DPC initiated a reset + * further up in the hierarchy to recover from an error. The reset was + * propagated down to this hotplug port. Ignore the resulting link flap. + * If the link failed to retrain successfully, synthesize the ignored event. + * Surprise removal during reset is detected through Presence Detect Changed. + */ +int pciehp_slot_reset(struct pcie_device *dev) +{ + struct controller *ctrl = get_service_data(dev); + + if (ctrl->state != ON_STATE) + return 0; + + pcie_capability_write_word(dev->port, PCI_EXP_SLTSTA, + PCI_EXP_SLTSTA_DLLSC); + + if (!pciehp_check_link_active(ctrl)) + pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC); + + return 0; +} + /* * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary * bus reset of the bridge, but at the same time we want to ensure that it is diff --git a/drivers/pci/hotplug/shpchp_hpc.c b/drivers/pci/hotplug/shpchp_hpc.c index 9e3b27744305..bd7557ca4910 100644 --- a/drivers/pci/hotplug/shpchp_hpc.c +++ b/drivers/pci/hotplug/shpchp_hpc.c @@ -295,7 +295,7 @@ static int shpc_write_cmd(struct slot *slot, u8 t_slot, u8 cmd) mutex_lock(&slot->ctrl->cmd_lock); if (!shpc_poll_ctrl_busy(ctrl)) { - /* After 1 sec and and the controller is still busy */ + /* After 1 sec and the controller is still busy */ ctrl_err(ctrl, "Controller is still busy after 1 sec\n"); retval = -EBUSY; goto out; diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index dafdc652fcd0..1d7a7c5b5307 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c @@ -164,13 +164,15 @@ static ssize_t sriov_vf_total_msix_show(struct device *dev, char *buf) { struct pci_dev *pdev = to_pci_dev(dev); + struct pci_driver *pdrv; u32 vf_total_msix = 0; device_lock(dev); - if (!pdev->driver || !pdev->driver->sriov_get_vf_total_msix) + pdrv = to_pci_driver(dev->driver); + if (!pdrv || !pdrv->sriov_get_vf_total_msix) goto unlock; - vf_total_msix = pdev->driver->sriov_get_vf_total_msix(pdev); + vf_total_msix = pdrv->sriov_get_vf_total_msix(pdev); unlock: device_unlock(dev); return sysfs_emit(buf, "%u\n", vf_total_msix); @@ -183,23 +185,24 @@ static ssize_t sriov_vf_msix_count_store(struct device *dev, { struct pci_dev *vf_dev = to_pci_dev(dev); struct pci_dev *pdev = pci_physfn(vf_dev); - int val, ret; + struct pci_driver *pdrv; + int val, ret = 0; - ret = kstrtoint(buf, 0, &val); - if (ret) - return ret; + if (kstrtoint(buf, 0, &val) < 0) + return -EINVAL; if (val < 0) return -EINVAL; device_lock(&pdev->dev); - if (!pdev->driver || !pdev->driver->sriov_set_msix_vec_count) { + pdrv = to_pci_driver(dev->driver); + if (!pdrv || !pdrv->sriov_set_msix_vec_count) { ret = -EOPNOTSUPP; goto err_pdev; } device_lock(&vf_dev->dev); - if (vf_dev->driver) { + if (to_pci_driver(vf_dev->dev.driver)) { /* * A driver is already attached to this VF and has configured * itself based on the current MSI-X vector count. 
Changing @@ -209,7 +212,7 @@ static ssize_t sriov_vf_msix_count_store(struct device *dev, goto err_dev; } - ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val); + ret = pdrv->sriov_set_msix_vec_count(vf_dev, val); err_dev: device_unlock(&vf_dev->dev); @@ -376,12 +379,12 @@ static ssize_t sriov_numvfs_store(struct device *dev, const char *buf, size_t count) { struct pci_dev *pdev = to_pci_dev(dev); - int ret; + struct pci_driver *pdrv; + int ret = 0; u16 num_vfs; - ret = kstrtou16(buf, 0, &num_vfs); - if (ret < 0) - return ret; + if (kstrtou16(buf, 0, &num_vfs) < 0) + return -EINVAL; if (num_vfs > pci_sriov_get_totalvfs(pdev)) return -ERANGE; @@ -392,14 +395,15 @@ static ssize_t sriov_numvfs_store(struct device *dev, goto exit; /* is PF driver loaded */ - if (!pdev->driver) { + pdrv = to_pci_driver(dev->driver); + if (!pdrv) { pci_info(pdev, "no driver bound to device; cannot configure SR-IOV\n"); ret = -ENOENT; goto exit; } /* is PF driver loaded w/callback */ - if (!pdev->driver->sriov_configure) { + if (!pdrv->sriov_configure) { pci_info(pdev, "driver does not support SR-IOV configuration via sysfs\n"); ret = -ENOENT; goto exit; @@ -407,7 +411,7 @@ static ssize_t sriov_numvfs_store(struct device *dev, if (num_vfs == 0) { /* disable VFs */ - ret = pdev->driver->sriov_configure(pdev, 0); + ret = pdrv->sriov_configure(pdev, 0); goto exit; } @@ -419,7 +423,7 @@ static ssize_t sriov_numvfs_store(struct device *dev, goto exit; } - ret = pdev->driver->sriov_configure(pdev, num_vfs); + ret = pdrv->sriov_configure(pdev, num_vfs); if (ret < 0) goto exit; diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 4b4792940e86..12e296d634eb 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -582,7 +582,8 @@ err: return ret; } -static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries) +static void __iomem *msix_map_region(struct pci_dev *dev, + unsigned int nr_entries) { resource_size_t phys_addr; u32 table_offset; diff --git a/drivers/pci/of.c b/drivers/pci/of.c index d84381ce82b5..0b1237cff239 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c @@ -423,7 +423,7 @@ failed: */ static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq) { - struct device_node *dn, *ppnode; + struct device_node *dn, *ppnode = NULL; struct pci_dev *ppdev; __be32 laddr[3]; u8 pin; @@ -452,8 +452,14 @@ static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args * if (pin == 0) return -ENODEV; + /* Local interrupt-map in the device node? Use it! */ + if (of_get_property(dn, "interrupt-map", NULL)) { + pin = pci_swizzle_interrupt_pin(pdev, pin); + ppnode = dn; + } + /* Now we walk up the PCI tree */ - for (;;) { + while (!ppnode) { /* Get the pci_dev of our parent */ ppdev = pdev->bus->self; diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 50cdde3e9a8b..8d47cb7218d1 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -874,7 +874,7 @@ static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap, int i; for_each_sg(sg, s, nents, i) { - s->dma_address = sg_phys(s) - p2p_pgmap->bus_offset; + s->dma_address = sg_phys(s) + p2p_pgmap->bus_offset; sg_dma_len(s) = s->length; } @@ -943,7 +943,7 @@ EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs); * * Parses an attribute value to decide whether to enable p2pdma. * The value can select a PCI device (using its full BDF device - * name) or a boolean (in any format strtobool() accepts). A false + * name) or a boolean (in any format kstrtobool() accepts). 
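The SR-IOV sysfs hunks above now look up the PF's driver with to_pci_driver() and call its sriov_configure(), sriov_get_vf_total_msix() and sriov_set_msix_vec_count() methods. A hedged sketch of the PF-driver side those files dispatch to; only the struct pci_driver hooks come from the code above, while the demo_* names, the vendor/device IDs and the fixed vector budget are placeholders:

#include <linux/module.h>
#include <linux/pci.h>

static int demo_pf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	return 0;	/* a real driver would enable and map the device here */
}

/* echo N > sriov_numvfs ends up here via pdrv->sriov_configure() */
static int demo_pf_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
	int ret;

	if (num_vfs == 0) {
		pci_disable_sriov(pdev);
		return 0;
	}
	ret = pci_enable_sriov(pdev, num_vfs);
	return ret ? ret : num_vfs;
}

/* backs sriov_vf_total_msix; the 64-vector budget is made up */
static u32 demo_pf_sriov_get_vf_total_msix(struct pci_dev *pf)
{
	return 64;
}

/* backs sriov_vf_msix_count; device-specific programming omitted */
static int demo_pf_sriov_set_msix_vec_count(struct pci_dev *vf, int count)
{
	return 0;
}

static const struct pci_device_id demo_pf_ids[] = {
	{ PCI_DEVICE(0x1234, 0xabcd) },		/* placeholder IDs */
	{ }
};

static struct pci_driver demo_pf_driver = {
	.name			  = "demo_pf",
	.id_table		  = demo_pf_ids,
	.probe			  = demo_pf_probe,
	.sriov_configure	  = demo_pf_sriov_configure,
	.sriov_get_vf_total_msix  = demo_pf_sriov_get_vf_total_msix,
	.sriov_set_msix_vec_count = demo_pf_sriov_set_msix_vec_count,
};
module_pci_driver(demo_pf_driver);

MODULE_LICENSE("GPL");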
A false * value disables p2pdma, a true value expects the caller * to automatically find a compatible device and specifying a PCI device * expects the caller to use the specific provider. @@ -975,11 +975,11 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, } else if ((page[0] == '0' || page[0] == '1') && !iscntrl(page[1])) { /* * If the user enters a PCI device that doesn't exist - * like "0000:01:00.1", we don't want strtobool to think + * like "0000:01:00.1", we don't want kstrtobool to think * it's a '0' when it's clearly not what the user wanted. * So we require 0's and 1's to be exactly one character. */ - } else if (!strtobool(page, use_p2pdma)) { + } else if (!kstrtobool(page, use_p2pdma)) { return 0; } diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c index fdaf86a888b7..db97cddfc85e 100644 --- a/drivers/pci/pci-bridge-emul.c +++ b/drivers/pci/pci-bridge-emul.c @@ -431,8 +431,21 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, /* Clear the W1C bits */ new &= ~((value << shift) & (behavior[reg / 4].w1c & mask)); + /* Save the new value with the cleared W1C bits into the cfgspace */ cfgspace[reg / 4] = cpu_to_le32(new); + /* + * Clear the W1C bits not specified by the write mask, so that the + * write_op() does not clear them. + */ + new &= ~(behavior[reg / 4].w1c & ~mask); + + /* + * Set the W1C bits specified by the write mask, so that write_op() + * knows about that they are to be cleared. + */ + new |= (value << shift) & (behavior[reg / 4].w1c & mask); + if (write_op) write_op(bridge, reg, old, new, mask); diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index 2761ab86490d..1d98c974381c 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -319,12 +319,10 @@ static long local_pci_probe(void *_ddi) * its remove routine. */ pm_runtime_get_sync(dev); - pci_dev->driver = pci_drv; rc = pci_drv->probe(pci_dev, ddi->id); if (!rc) return rc; if (rc < 0) { - pci_dev->driver = NULL; pm_runtime_put_sync(dev); return rc; } @@ -390,14 +388,13 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev, * @pci_dev: PCI device being probed * * returns 0 on success, else error. - * side-effect: pci_dev->driver is set to drv when drv claims pci_dev. 
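The pci-bridge-emul.c hunk above stores the register with the acknowledged W1C bits already cleared, then rebuilds the value handed to write_op() so the backend only clears the W1C bits the caller actually wrote as 1 and leaves the rest alone. A standalone illustration of that arithmetic, using a made-up register layout and values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t rw    = 0x00ff0000u;	/* read-write bits (hypothetical layout)   */
	uint32_t w1c   = 0x0000ffffu;	/* write-1-to-clear status bits            */
	uint32_t old   = 0x00aa0301u;	/* bits 0, 8, 9 pending; RW field = 0xaa   */
	uint32_t mask  = 0x000000ffu;	/* the guest writes only byte 0            */
	uint32_t value = 0x00000001u;	/* ...acknowledging bit 0 (already shifted) */
	uint32_t new;

	new  = old & (~mask | ~rw);		/* keep everything but masked RW bits   */
	new |= value & (rw & mask);		/* fold in written RW bits (none here)  */
	new &= ~(value & (w1c & mask));		/* acked status bit drops out           */
	printf("stored in cfgspace:   0x%08x\n", new);	/* 0x00aa0300 */

	new &= ~(w1c & ~mask);			/* don't let write_op() clear bits 8-9  */
	new |= value & (w1c & mask);		/* but do tell it to clear bit 0        */
	printf("passed to write_op(): 0x%08x\n", new);	/* 0x00aa0001 */
	return 0;
}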
*/ static int __pci_device_probe(struct pci_driver *drv, struct pci_dev *pci_dev) { const struct pci_device_id *id; int error = 0; - if (!pci_dev->driver && drv->probe) { + if (drv->probe) { error = -ENODEV; id = pci_match_device(drv, pci_dev); @@ -457,18 +454,15 @@ static int pci_device_probe(struct device *dev) static void pci_device_remove(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct pci_driver *drv = pci_dev->driver; + struct pci_driver *drv = to_pci_driver(dev->driver); - if (drv) { - if (drv->remove) { - pm_runtime_get_sync(dev); - drv->remove(pci_dev); - pm_runtime_put_noidle(dev); - } - pcibios_free_irq(pci_dev); - pci_dev->driver = NULL; - pci_iov_remove(pci_dev); + if (drv->remove) { + pm_runtime_get_sync(dev); + drv->remove(pci_dev); + pm_runtime_put_noidle(dev); } + pcibios_free_irq(pci_dev); + pci_iov_remove(pci_dev); /* Undo the runtime PM settings in local_pci_probe() */ pm_runtime_put_sync(dev); @@ -495,7 +489,7 @@ static void pci_device_remove(struct device *dev) static void pci_device_shutdown(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct pci_driver *drv = pci_dev->driver; + struct pci_driver *drv = to_pci_driver(dev->driver); pm_runtime_resume(dev); @@ -576,7 +570,7 @@ static int pci_pm_reenable_device(struct pci_dev *pci_dev) { int retval; - /* if the device was enabled before suspend, reenable */ + /* if the device was enabled before suspend, re-enable */ retval = pci_reenable_device(pci_dev); /* * if the device was busmaster before the suspend, make it busmaster @@ -591,7 +585,7 @@ static int pci_pm_reenable_device(struct pci_dev *pci_dev) static int pci_legacy_suspend(struct device *dev, pm_message_t state) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct pci_driver *drv = pci_dev->driver; + struct pci_driver *drv = to_pci_driver(dev->driver); if (drv && drv->suspend) { pci_power_t prev = pci_dev->current_state; @@ -632,7 +626,7 @@ static int pci_legacy_suspend_late(struct device *dev, pm_message_t state) static int pci_legacy_resume(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct pci_driver *drv = pci_dev->driver; + struct pci_driver *drv = to_pci_driver(dev->driver); pci_fixup_device(pci_fixup_resume, pci_dev); @@ -651,7 +645,7 @@ static void pci_pm_default_suspend(struct pci_dev *pci_dev) static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev) { - struct pci_driver *drv = pci_dev->driver; + struct pci_driver *drv = to_pci_driver(pci_dev->dev.driver); bool ret = drv && (drv->suspend || drv->resume); /* @@ -1244,11 +1238,11 @@ static int pci_pm_runtime_suspend(struct device *dev) int error; /* - * If pci_dev->driver is not set (unbound), we leave the device in D0, - * but it may go to D3cold when the bridge above it runtime suspends. - * Save its config space in case that happens. + * If the device has no driver, we leave it in D0, but it may go to + * D3cold when the bridge above it runtime suspends. Save its + * config space in case that happens. 
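With struct pci_dev.driver gone, the hunks above (and the rest of this series) derive the bound driver from the driver model via to_pci_driver(dev->driver). A minimal sketch of the lookup pattern, assuming, as the converted call sites do, that to_pci_driver() turns a NULL struct device_driver pointer into a NULL struct pci_driver pointer:

#include <linux/pci.h>

/* Illustrative helper, not part of the series: return the bound driver's
 * error handlers, or NULL if the device is unbound or has none. */
static const struct pci_error_handlers *demo_err_handlers(struct pci_dev *pdev)
{
	struct pci_driver *pdrv = to_pci_driver(pdev->dev.driver);

	return pdrv ? pdrv->err_handler : NULL;
}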
*/ - if (!pci_dev->driver) { + if (!to_pci_driver(dev->driver)) { pci_save_state(pci_dev); return 0; } @@ -1305,7 +1299,7 @@ static int pci_pm_runtime_resume(struct device *dev) */ pci_restore_standard_config(pci_dev); - if (!pci_dev->driver) + if (!to_pci_driver(dev->driver)) return 0; pci_fixup_device(pci_fixup_resume_early, pci_dev); @@ -1324,14 +1318,13 @@ static int pci_pm_runtime_resume(struct device *dev) static int pci_pm_runtime_idle(struct device *dev) { - struct pci_dev *pci_dev = to_pci_dev(dev); const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; /* - * If pci_dev->driver is not set (unbound), the device should - * always remain in D0 regardless of the runtime PM status + * If the device has no driver, it should always remain in D0 + * regardless of the runtime PM status */ - if (!pci_dev->driver) + if (!to_pci_driver(dev->driver)) return 0; if (!pm) @@ -1438,8 +1431,10 @@ static struct pci_driver pci_compat_driver = { */ struct pci_driver *pci_dev_driver(const struct pci_dev *dev) { - if (dev->driver) - return dev->driver; + struct pci_driver *drv = to_pci_driver(dev->dev.driver); + + if (drv) + return drv; else { int i; for (i = 0; i <= PCI_ROM_RESOURCE; i++) @@ -1542,7 +1537,7 @@ static int pci_uevent(struct device *dev, struct kobj_uevent_env *env) return 0; } -#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH) +#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) /** * pci_uevent_ers - emit a uevent during recovery path of PCI device * @pdev: PCI device undergoing error recovery diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index f807b92afa6c..cfe2f85af09e 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -26,6 +26,7 @@ #include <linux/slab.h> #include <linux/vgaarb.h> #include <linux/pm_runtime.h> +#include <linux/msi.h> #include <linux/of.h> #include "pci.h" @@ -49,7 +50,28 @@ pci_config_attr(subsystem_vendor, "0x%04x\n"); pci_config_attr(subsystem_device, "0x%04x\n"); pci_config_attr(revision, "0x%02x\n"); pci_config_attr(class, "0x%06x\n"); -pci_config_attr(irq, "%u\n"); + +static ssize_t irq_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct pci_dev *pdev = to_pci_dev(dev); + +#ifdef CONFIG_PCI_MSI + /* + * For MSI, show the first MSI IRQ; for all other cases including + * MSI-X, show the legacy INTx IRQ. 
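The new irq_show() above changes what userspace sees in /sys/bus/pci/devices/<BDF>/irq: the first MSI vector when MSI is enabled, otherwise (including MSI-X) the legacy INTx IRQ. A small user-space sketch that reads the attribute; the BDF in the path is hypothetical:

#include <stdio.h>

int main(void)
{
	/* substitute a real address from `lspci -D` */
	const char *path = "/sys/bus/pci/devices/0000:00:1f.6/irq";
	FILE *f = fopen(path, "r");
	int irq;

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%d", &irq) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	/* first MSI vector if MSI is enabled, otherwise the INTx IRQ */
	printf("%s -> %d\n", path, irq);
	return 0;
}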
+ */ + if (pdev->msi_enabled) { + struct msi_desc *desc = first_pci_msi_entry(pdev); + + return sysfs_emit(buf, "%u\n", desc->irq); + } +#endif + + return sysfs_emit(buf, "%u\n", pdev->irq); +} +static DEVICE_ATTR_RO(irq); static ssize_t broken_parity_status_show(struct device *dev, struct device_attribute *attr, @@ -275,15 +297,15 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr, { struct pci_dev *pdev = to_pci_dev(dev); unsigned long val; - ssize_t result = kstrtoul(buf, 0, &val); - - if (result < 0) - return result; + ssize_t result = 0; /* this can crash the machine when done on the "wrong" device */ if (!capable(CAP_SYS_ADMIN)) return -EPERM; + if (kstrtoul(buf, 0, &val) < 0) + return -EINVAL; + device_lock(dev); if (dev->driver) result = -EBUSY; @@ -314,14 +336,13 @@ static ssize_t numa_node_store(struct device *dev, size_t count) { struct pci_dev *pdev = to_pci_dev(dev); - int node, ret; + int node; if (!capable(CAP_SYS_ADMIN)) return -EPERM; - ret = kstrtoint(buf, 0, &node); - if (ret) - return ret; + if (kstrtoint(buf, 0, &node) < 0) + return -EINVAL; if ((node < 0 && node != NUMA_NO_NODE) || node >= MAX_NUMNODES) return -EINVAL; @@ -380,12 +401,12 @@ static ssize_t msi_bus_store(struct device *dev, struct device_attribute *attr, struct pci_bus *subordinate = pdev->subordinate; unsigned long val; - if (kstrtoul(buf, 0, &val) < 0) - return -EINVAL; - if (!capable(CAP_SYS_ADMIN)) return -EPERM; + if (kstrtoul(buf, 0, &val) < 0) + return -EINVAL; + /* * "no_msi" and "bus_flags" only affect what happens when a driver * requests MSI or MSI-X. They don't affect any drivers that have @@ -1341,10 +1362,10 @@ static ssize_t reset_store(struct device *dev, struct device_attribute *attr, { struct pci_dev *pdev = to_pci_dev(dev); unsigned long val; - ssize_t result = kstrtoul(buf, 0, &val); + ssize_t result; - if (result < 0) - return result; + if (kstrtoul(buf, 0, &val) < 0) + return -EINVAL; if (val != 1) return -EINVAL; diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 39c65b5f92c1..da75c422ba85 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -269,7 +269,7 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path, const char **endptr) { int ret; - int seg, bus, slot, func; + unsigned int seg, bus, slot, func; char *wpath, *p; char end; @@ -1439,6 +1439,24 @@ static int pci_save_pcie_state(struct pci_dev *dev) return 0; } +void pci_bridge_reconfigure_ltr(struct pci_dev *dev) +{ +#ifdef CONFIG_PCIEASPM + struct pci_dev *bridge; + u32 ctl; + + bridge = pci_upstream_bridge(dev); + if (bridge && bridge->ltr_path) { + pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2, &ctl); + if (!(ctl & PCI_EXP_DEVCTL2_LTR_EN)) { + pci_dbg(bridge, "re-enabling LTR\n"); + pcie_capability_set_word(bridge, PCI_EXP_DEVCTL2, + PCI_EXP_DEVCTL2_LTR_EN); + } + } +#endif +} + static void pci_restore_pcie_state(struct pci_dev *dev) { int i = 0; @@ -1449,6 +1467,13 @@ static void pci_restore_pcie_state(struct pci_dev *dev) if (!save_state) return; + /* + * Downstream ports reset the LTR enable bit when link goes down. + * Check and re-configure the bit here before restoring device. + * PCIe r5.0, sec 7.5.3.16. 
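pci_bridge_reconfigure_ltr() above exists because Downstream Ports clear LTR Enable when the link goes down, so the bit has to be put back on the bridge before the device's own saved state is restored. A hedged sketch of how the same bit could be inspected on the bridge above a device (purely illustrative; the in-kernel path is the helper above):

#include <linux/pci.h>

static bool demo_upstream_ltr_enabled(struct pci_dev *pdev)
{
	struct pci_dev *bridge = pci_upstream_bridge(pdev);
	u32 ctl = 0;

	if (!bridge)
		return false;

	pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2, &ctl);
	return !!(ctl & PCI_EXP_DEVCTL2_LTR_EN);
}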
+ */ + pci_bridge_reconfigure_ltr(dev); + cap = (u16 *)&save_state->cap.data[0]; pcie_capability_write_word(dev, PCI_EXP_DEVCTL, cap[i++]); pcie_capability_write_word(dev, PCI_EXP_LNKCTL, cap[i++]); @@ -2053,14 +2078,14 @@ void pcim_pin_device(struct pci_dev *pdev) EXPORT_SYMBOL(pcim_pin_device); /* - * pcibios_add_device - provide arch specific hooks when adding device dev + * pcibios_device_add - provide arch specific hooks when adding device dev * @dev: the PCI device being added * * Permits the platform to provide architecture specific functionality when * devices are added. This is the default implementation. Architecture * implementations can override this. */ -int __weak pcibios_add_device(struct pci_dev *dev) +int __weak pcibios_device_add(struct pci_dev *dev) { return 0; } @@ -2180,6 +2205,7 @@ int pci_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state state) } EXPORT_SYMBOL_GPL(pci_set_pcie_reset_state); +#ifdef CONFIG_PCIEAER void pcie_clear_device_status(struct pci_dev *dev) { u16 sta; @@ -2187,6 +2213,7 @@ void pcie_clear_device_status(struct pci_dev *dev) pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &sta); pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta); } +#endif /** * pcie_clear_root_pme_status - Clear root port PME interrupt status. @@ -3697,6 +3724,14 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask) struct pci_dev *bridge; u32 cap, ctl2; + /* + * Per PCIe r5.0, sec 9.3.5.10, the AtomicOp Requester Enable bit + * in Device Control 2 is reserved in VFs and the PF value applies + * to all associated VFs. + */ + if (dev->is_virtfn) + return -EINVAL; + if (!pci_is_pcie(dev)) return -EINVAL; @@ -5068,13 +5103,14 @@ EXPORT_SYMBOL_GPL(pci_dev_unlock); static void pci_dev_save_and_disable(struct pci_dev *dev) { + struct pci_driver *drv = to_pci_driver(dev->dev.driver); const struct pci_error_handlers *err_handler = - dev->driver ? dev->driver->err_handler : NULL; + drv ? drv->err_handler : NULL; /* - * dev->driver->err_handler->reset_prepare() is protected against - * races with ->remove() by the device lock, which must be held by - * the caller. + * drv->err_handler->reset_prepare() is protected against races + * with ->remove() by the device lock, which must be held by the + * caller. */ if (err_handler && err_handler->reset_prepare) err_handler->reset_prepare(dev); @@ -5099,15 +5135,15 @@ static void pci_dev_save_and_disable(struct pci_dev *dev) static void pci_dev_restore(struct pci_dev *dev) { + struct pci_driver *drv = to_pci_driver(dev->dev.driver); const struct pci_error_handlers *err_handler = - dev->driver ? dev->driver->err_handler : NULL; + drv ? drv->err_handler : NULL; pci_restore_state(dev); /* - * dev->driver->err_handler->reset_done() is protected against - * races with ->remove() by the device lock, which must be held by - * the caller. + * drv->err_handler->reset_done() is protected against races with + * ->remove() by the device lock, which must be held by the caller. */ if (err_handler && err_handler->reset_done) err_handler->reset_done(dev); @@ -5268,7 +5304,7 @@ const struct attribute_group pci_dev_reset_method_attr_group = { */ int __pci_reset_function_locked(struct pci_dev *dev) { - int i, m, rc = -ENOTTY; + int i, m, rc; might_sleep(); @@ -6304,11 +6340,12 @@ EXPORT_SYMBOL_GPL(pci_pr3_present); * cannot be left as a userspace activity). DMA aliases should therefore * be configured via quirks, such as the PCI fixup header quirk. 
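pci_add_dma_alias(), whose signature is tidied above, is intended to be called from a header fixup as the preceding comment says. A sketch for a hypothetical device whose DMA also appears to come from the next function number; the vendor/device IDs are placeholders, not a real quirk:

#include <linux/pci.h>

static void demo_dma_alias_quirk(struct pci_dev *pdev)
{
	/* alias a single extra devfn: the function right after this one */
	pci_add_dma_alias(pdev, PCI_DEVFN(PCI_SLOT(pdev->devfn),
					  PCI_FUNC(pdev->devfn) + 1), 1);
}
DECLARE_PCI_FIXUP_HEADER(0x1234, 0xabcd, demo_dma_alias_quirk);	/* placeholder IDs */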
*/ -void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from, unsigned nr_devfns) +void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from, + unsigned int nr_devfns) { int devfn_to; - nr_devfns = min(nr_devfns, (unsigned) MAX_NR_DEVFNS - devfn_from); + nr_devfns = min(nr_devfns, (unsigned int)MAX_NR_DEVFNS - devfn_from); devfn_to = devfn_from + nr_devfns - 1; if (!dev->dma_alias_mask) diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index bd39098c57da..3d60cabde1a1 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h @@ -86,6 +86,7 @@ void pci_msix_init(struct pci_dev *dev); bool pci_bridge_d3_possible(struct pci_dev *dev); void pci_bridge_d3_update(struct pci_dev *dev); void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev); +void pci_bridge_reconfigure_ltr(struct pci_dev *dev); static inline void pci_wakeup_event(struct pci_dev *dev) { diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile index b2980db88cc0..5783a2f79e6a 100644 --- a/drivers/pci/pcie/Makefile +++ b/drivers/pci/pcie/Makefile @@ -2,12 +2,12 @@ # # Makefile for PCI Express features and port driver -pcieportdrv-y := portdrv_core.o portdrv_pci.o err.o rcec.o +pcieportdrv-y := portdrv_core.o portdrv_pci.o rcec.o obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o obj-$(CONFIG_PCIEASPM) += aspm.o -obj-$(CONFIG_PCIEAER) += aer.o +obj-$(CONFIG_PCIEAER) += aer.o err.o obj-$(CONFIG_PCIEAER_INJECT) += aer_inject.o obj-$(CONFIG_PCIE_PME) += pme.o obj-$(CONFIG_PCIE_DPC) += dpc.o diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c index 9784fdcf3006..9fa1f97e5b27 100644 --- a/drivers/pci/pcie/aer.c +++ b/drivers/pci/pcie/aer.c @@ -57,7 +57,7 @@ struct aer_stats { * "as seen by this device". Note that this may mean that if an * end point is causing problems, the AER counters may increment * at its link partner (e.g. root port) because the errors will be - * "seen" by the link partner and not the the problematic end point + * "seen" by the link partner and not the problematic end point * itself (which may report all counters as 0 as it never saw any * problems). 
*/ diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index 013a47f587ce..52c74682601a 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -1219,7 +1219,7 @@ static ssize_t aspm_attr_store_common(struct device *dev, struct pcie_link_state *link = pcie_aspm_get_link(pdev); bool state_enable; - if (strtobool(buf, &state_enable) < 0) + if (kstrtobool(buf, &state_enable) < 0) return -EINVAL; down_read(&pci_bus_sem); @@ -1276,7 +1276,7 @@ static ssize_t clkpm_store(struct device *dev, struct pcie_link_state *link = pcie_aspm_get_link(pdev); bool state_enable; - if (strtobool(buf, &state_enable) < 0) + if (kstrtobool(buf, &state_enable) < 0) return -EINVAL; down_read(&pci_bus_sem); diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c index b576aa890c76..356b9317297e 100644 --- a/drivers/pci/pcie/err.c +++ b/drivers/pci/pcie/err.c @@ -49,14 +49,16 @@ static int report_error_detected(struct pci_dev *dev, pci_channel_state_t state, enum pci_ers_result *result) { + struct pci_driver *pdrv; pci_ers_result_t vote; const struct pci_error_handlers *err_handler; device_lock(&dev->dev); + pdrv = to_pci_driver(dev->dev.driver); if (!pci_dev_set_io_state(dev, state) || - !dev->driver || - !dev->driver->err_handler || - !dev->driver->err_handler->error_detected) { + !pdrv || + !pdrv->err_handler || + !pdrv->err_handler->error_detected) { /* * If any device in the subtree does not have an error_detected * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent @@ -70,7 +72,7 @@ static int report_error_detected(struct pci_dev *dev, vote = PCI_ERS_RESULT_NONE; } } else { - err_handler = dev->driver->err_handler; + err_handler = pdrv->err_handler; vote = err_handler->error_detected(dev, state); } pci_uevent_ers(dev, vote); @@ -91,16 +93,18 @@ static int report_normal_detected(struct pci_dev *dev, void *data) static int report_mmio_enabled(struct pci_dev *dev, void *data) { + struct pci_driver *pdrv; pci_ers_result_t vote, *result = data; const struct pci_error_handlers *err_handler; device_lock(&dev->dev); - if (!dev->driver || - !dev->driver->err_handler || - !dev->driver->err_handler->mmio_enabled) + pdrv = to_pci_driver(dev->dev.driver); + if (!pdrv || + !pdrv->err_handler || + !pdrv->err_handler->mmio_enabled) goto out; - err_handler = dev->driver->err_handler; + err_handler = pdrv->err_handler; vote = err_handler->mmio_enabled(dev); *result = merge_result(*result, vote); out: @@ -110,16 +114,18 @@ out: static int report_slot_reset(struct pci_dev *dev, void *data) { + struct pci_driver *pdrv; pci_ers_result_t vote, *result = data; const struct pci_error_handlers *err_handler; device_lock(&dev->dev); - if (!dev->driver || - !dev->driver->err_handler || - !dev->driver->err_handler->slot_reset) + pdrv = to_pci_driver(dev->dev.driver); + if (!pdrv || + !pdrv->err_handler || + !pdrv->err_handler->slot_reset) goto out; - err_handler = dev->driver->err_handler; + err_handler = pdrv->err_handler; vote = err_handler->slot_reset(dev); *result = merge_result(*result, vote); out: @@ -129,16 +135,18 @@ out: static int report_resume(struct pci_dev *dev, void *data) { + struct pci_driver *pdrv; const struct pci_error_handlers *err_handler; device_lock(&dev->dev); + pdrv = to_pci_driver(dev->dev.driver); if (!pci_dev_set_io_state(dev, pci_channel_io_normal) || - !dev->driver || - !dev->driver->err_handler || - !dev->driver->err_handler->resume) + !pdrv || + !pdrv->err_handler || + !pdrv->err_handler->resume) goto out; - err_handler = dev->driver->err_handler; + err_handler = 
pdrv->err_handler; err_handler->resume(dev); out: pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED); diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h index 2ff5724b8f13..0ef4bf5f811d 100644 --- a/drivers/pci/pcie/portdrv.h +++ b/drivers/pci/pcie/portdrv.h @@ -85,8 +85,7 @@ struct pcie_port_service_driver { int (*runtime_suspend)(struct pcie_device *dev); int (*runtime_resume)(struct pcie_device *dev); - /* Device driver may resume normal operations */ - void (*error_resume)(struct pci_dev *dev); + int (*slot_reset)(struct pcie_device *dev); int port_type; /* Type of the port this driver can handle */ u32 service; /* Port service this device represents */ @@ -110,6 +109,7 @@ void pcie_port_service_unregister(struct pcie_port_service_driver *new); extern struct bus_type pcie_port_bus_type; int pcie_port_device_register(struct pci_dev *dev); +int pcie_port_device_iter(struct device *dev, void *data); #ifdef CONFIG_PM int pcie_port_device_suspend(struct device *dev); int pcie_port_device_resume_noirq(struct device *dev); @@ -118,8 +118,6 @@ int pcie_port_device_runtime_suspend(struct device *dev); int pcie_port_device_runtime_resume(struct device *dev); #endif void pcie_port_device_remove(struct pci_dev *dev); -int __must_check pcie_port_bus_register(void); -void pcie_port_bus_unregister(void); struct pci_dev; diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c index 3ee63968deaa..bda630889f95 100644 --- a/drivers/pci/pcie/portdrv_core.c +++ b/drivers/pci/pcie/portdrv_core.c @@ -166,9 +166,6 @@ static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask) { int ret, i; - for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) - irqs[i] = -1; - /* * If we support PME but can't use MSI/MSI-X for it, we have to * fall back to INTx or other interrupts, e.g., a system shared @@ -317,8 +314,10 @@ static int pcie_device_init(struct pci_dev *pdev, int service, int irq) */ int pcie_port_device_register(struct pci_dev *dev) { - int status, capabilities, i, nr_service; - int irqs[PCIE_PORT_DEVICE_MAXSERVICES]; + int status, capabilities, irq_services, i, nr_service; + int irqs[PCIE_PORT_DEVICE_MAXSERVICES] = { + [0 ... PCIE_PORT_DEVICE_MAXSERVICES-1] = -1 + }; /* Enable PCI Express port device */ status = pci_enable_device(dev); @@ -331,18 +330,32 @@ int pcie_port_device_register(struct pci_dev *dev) return 0; pci_set_master(dev); - /* - * Initialize service irqs. Don't use service devices that - * require interrupts if there is no way to generate them. - * However, some drivers may have a polling mode (e.g. pciehp_poll_mode) - * that can be used in the absence of irqs. Allow them to determine - * if that is to be used. - */ - status = pcie_init_service_irqs(dev, irqs, capabilities); - if (status) { - capabilities &= PCIE_PORT_SERVICE_HP; - if (!capabilities) - goto error_disable; + + irq_services = 0; + if (IS_ENABLED(CONFIG_PCIE_PME)) + irq_services |= PCIE_PORT_SERVICE_PME; + if (IS_ENABLED(CONFIG_PCIEAER)) + irq_services |= PCIE_PORT_SERVICE_AER; + if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)) + irq_services |= PCIE_PORT_SERVICE_HP; + if (IS_ENABLED(CONFIG_PCIE_DPC)) + irq_services |= PCIE_PORT_SERVICE_DPC; + irq_services &= capabilities; + + if (irq_services) { + /* + * Initialize service IRQs. Don't use service devices that + * require interrupts if there is no way to generate them. + * However, some drivers may have a polling mode (e.g. + * pciehp_poll_mode) that can be used in the absence of IRQs. + * Allow them to determine if that is to be used. 
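The portdrv.h hunk above drops the unused error_resume() hook and adds slot_reset(), which pciehp wires up in the hunks earlier in this diff. A loose sketch of a hypothetical port service declaring the new callback; the demo_svc names are invented, the service/port constants are placeholders, and the header is private to drivers/pci/pcie/:

#include <linux/pci.h>
#include "portdrv.h"		/* private header shown above */

static int demo_svc_probe(struct pcie_device *dev)
{
	return 0;
}

static void demo_svc_remove(struct pcie_device *dev)
{
}

static int demo_svc_slot_reset(struct pcie_device *dev)
{
	/* re-sync whatever state the error-induced reset may have clobbered */
	return 0;
}

static struct pcie_port_service_driver demo_svc = {
	.name		= "demo_svc",
	.port_type	= PCIE_ANY_PORT,
	.service	= PCIE_PORT_SERVICE_HP,	/* placeholder service ID */
	.probe		= demo_svc_probe,
	.remove		= demo_svc_remove,
	.slot_reset	= demo_svc_slot_reset,
};

Registration would go through pcie_port_service_register(&demo_svc), the same path the existing services use, after which pcie_portdrv_slot_reset() iterates the children and invokes the callback.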
+ */ + status = pcie_init_service_irqs(dev, irqs, irq_services); + if (status) { + irq_services &= PCIE_PORT_SERVICE_HP; + if (!irq_services) + goto error_disable; + } } /* Allocate child services if any */ @@ -367,24 +380,24 @@ error_disable: return status; } -#ifdef CONFIG_PM -typedef int (*pcie_pm_callback_t)(struct pcie_device *); +typedef int (*pcie_callback_t)(struct pcie_device *); -static int pm_iter(struct device *dev, void *data) +int pcie_port_device_iter(struct device *dev, void *data) { struct pcie_port_service_driver *service_driver; size_t offset = *(size_t *)data; - pcie_pm_callback_t cb; + pcie_callback_t cb; if ((dev->bus == &pcie_port_bus_type) && dev->driver) { service_driver = to_service_driver(dev->driver); - cb = *(pcie_pm_callback_t *)((void *)service_driver + offset); + cb = *(pcie_callback_t *)((void *)service_driver + offset); if (cb) return cb(to_pcie_device(dev)); } return 0; } +#ifdef CONFIG_PM /** * pcie_port_device_suspend - suspend port services associated with a PCIe port * @dev: PCI Express port to handle @@ -392,13 +405,13 @@ static int pm_iter(struct device *dev, void *data) int pcie_port_device_suspend(struct device *dev) { size_t off = offsetof(struct pcie_port_service_driver, suspend); - return device_for_each_child(dev, &off, pm_iter); + return device_for_each_child(dev, &off, pcie_port_device_iter); } int pcie_port_device_resume_noirq(struct device *dev) { size_t off = offsetof(struct pcie_port_service_driver, resume_noirq); - return device_for_each_child(dev, &off, pm_iter); + return device_for_each_child(dev, &off, pcie_port_device_iter); } /** @@ -408,7 +421,7 @@ int pcie_port_device_resume_noirq(struct device *dev) int pcie_port_device_resume(struct device *dev) { size_t off = offsetof(struct pcie_port_service_driver, resume); - return device_for_each_child(dev, &off, pm_iter); + return device_for_each_child(dev, &off, pcie_port_device_iter); } /** @@ -418,7 +431,7 @@ int pcie_port_device_resume(struct device *dev) int pcie_port_device_runtime_suspend(struct device *dev) { size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend); - return device_for_each_child(dev, &off, pm_iter); + return device_for_each_child(dev, &off, pcie_port_device_iter); } /** @@ -428,7 +441,7 @@ int pcie_port_device_runtime_suspend(struct device *dev) int pcie_port_device_runtime_resume(struct device *dev) { size_t off = offsetof(struct pcie_port_service_driver, runtime_resume); - return device_for_each_child(dev, &off, pm_iter); + return device_for_each_child(dev, &off, pcie_port_device_iter); } #endif /* PM */ diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c index c7ff1eea225a..35eca6277a96 100644 --- a/drivers/pci/pcie/portdrv_pci.c +++ b/drivers/pci/pcie/portdrv_pci.c @@ -160,6 +160,9 @@ static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev, static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev) { + size_t off = offsetof(struct pcie_port_service_driver, slot_reset); + device_for_each_child(&dev->dev, &off, pcie_port_device_iter); + pci_restore_state(dev); pci_save_state(dev); return PCI_ERS_RESULT_RECOVERED; @@ -170,29 +173,6 @@ static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev) return PCI_ERS_RESULT_RECOVERED; } -static int resume_iter(struct device *device, void *data) -{ - struct pcie_device *pcie_device; - struct pcie_port_service_driver *driver; - - if (device->bus == &pcie_port_bus_type && device->driver) { - driver = to_service_driver(device->driver); - if (driver 
&& driver->error_resume) { - pcie_device = to_pcie_device(device); - - /* Forward error message to service drivers */ - driver->error_resume(pcie_device->port); - } - } - - return 0; -} - -static void pcie_portdrv_err_resume(struct pci_dev *dev) -{ - device_for_each_child(&dev->dev, NULL, resume_iter); -} - /* * LINUX Device Driver Model */ @@ -210,7 +190,6 @@ static const struct pci_error_handlers pcie_portdrv_err_handler = { .error_detected = pcie_portdrv_error_detected, .slot_reset = pcie_portdrv_slot_reset, .mmio_enabled = pcie_portdrv_mmio_enabled, - .resume = pcie_portdrv_err_resume, }; static struct pci_driver pcie_portdriver = { diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index d9fc02a71baa..087d3658f75c 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -883,11 +883,11 @@ static void pci_set_bus_msi_domain(struct pci_bus *bus) static int pci_register_host_bridge(struct pci_host_bridge *bridge) { struct device *parent = bridge->dev.parent; - struct resource_entry *window, *n; + struct resource_entry *window, *next, *n; struct pci_bus *bus, *b; - resource_size_t offset; + resource_size_t offset, next_offset; LIST_HEAD(resources); - struct resource *res; + struct resource *res, *next_res; char addr[64], *fmt; const char *name; int err; @@ -970,11 +970,34 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge) if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE) dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n"); + /* Coalesce contiguous windows */ + resource_list_for_each_entry_safe(window, n, &resources) { + if (list_is_last(&window->node, &resources)) + break; + + next = list_next_entry(window, node); + offset = window->offset; + res = window->res; + next_offset = next->offset; + next_res = next->res; + + if (res->flags != next_res->flags || offset != next_offset) + continue; + + if (res->end + 1 == next_res->start) { + next_res->start = res->start; + res->flags = res->start = res->end = 0; + } + } + /* Add initial resources to the bus */ resource_list_for_each_entry_safe(window, n, &resources) { - list_move_tail(&window->node, &bridge->windows); offset = window->offset; res = window->res; + if (!res->end) + continue; + + list_move_tail(&window->node, &bridge->windows); if (res->flags & IORESOURCE_BUS) pci_bus_insert_busn_res(bus, bus->number, res->end); @@ -2168,9 +2191,21 @@ static void pci_configure_ltr(struct pci_dev *dev) * Complex and all intermediate Switches indicate support for LTR. * PCIe r4.0, sec 6.18. */ - if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || - ((bridge = pci_upstream_bridge(dev)) && - bridge->ltr_path)) { + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { + pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, + PCI_EXP_DEVCTL2_LTR_EN); + dev->ltr_path = 1; + return; + } + + /* + * If we're configuring a hot-added device, LTR was likely + * disabled in the upstream bridge, so re-enable it before enabling + * it in the new device. + */ + bridge = pci_upstream_bridge(dev); + if (bridge && bridge->ltr_path) { + pci_bridge_reconfigure_ltr(dev); pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, PCI_EXP_DEVCTL2_LTR_EN); dev->ltr_path = 1; @@ -2450,7 +2485,7 @@ static struct irq_domain *pci_dev_msi_domain(struct pci_dev *dev) struct irq_domain *d; /* - * If a domain has been set through the pcibios_add_device() + * If a domain has been set through the pcibios_device_add() * callback, then this is the one (platform code knows best). 
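The pci_register_host_bridge() hunk above merges adjacent host bridge windows when their flags and CPU-to-bus offsets match and the first window ends exactly where the next begins; the emptied window is then skipped when resources are moved onto the bridge. A standalone sketch of that merge test with two hypothetical apertures:

#include <stdint.h>
#include <stdio.h>

struct win { uint64_t start, end, offset; unsigned int flags; };

int main(void)
{
	/* two hypothetical _CRS memory apertures that happen to be contiguous */
	struct win a = { 0x80000000, 0x8fffffff, 0, 0x200 /* IORESOURCE_MEM-like */ };
	struct win b = { 0x90000000, 0x9fffffff, 0, 0x200 };

	/* same test the probe.c hunk applies before merging */
	if (a.flags == b.flags && a.offset == b.offset && a.end + 1 == b.start) {
		b.start = a.start;			/* grow the second window backwards */
		a.start = a.end = a.flags = 0;		/* first window is dropped later    */
	}
	printf("merged window: [0x%llx-0x%llx]\n",
	       (unsigned long long)b.start, (unsigned long long)b.end);
	return 0;
}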
*/ d = dev_get_msi_domain(&dev->dev); @@ -2518,7 +2553,7 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus) list_add_tail(&dev->bus_list, &bus->devices); up_write(&pci_bus_sem); - ret = pcibios_add_device(dev); + ret = pcibios_device_add(dev); WARN_ON(ret < 0); /* Set up MSI IRQ domain */ @@ -2550,11 +2585,12 @@ struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn) } EXPORT_SYMBOL(pci_scan_single_device); -static unsigned next_fn(struct pci_bus *bus, struct pci_dev *dev, unsigned fn) +static unsigned int next_fn(struct pci_bus *bus, struct pci_dev *dev, + unsigned int fn) { int pos; u16 cap = 0; - unsigned next_fn; + unsigned int next_fn; if (pci_ari_enabled(bus)) { if (!dev) @@ -2613,7 +2649,7 @@ static int only_one_child(struct pci_bus *bus) */ int pci_scan_slot(struct pci_bus *bus, int devfn) { - unsigned fn, nr = 0; + unsigned int fn, nr = 0; struct pci_dev *dev; if (only_one_child(bus) && (devfn > 0)) diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 4537d1ea14fd..aedb78c86ddc 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -501,7 +501,7 @@ static void quirk_s3_64M(struct pci_dev *dev) DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M); -static void quirk_io(struct pci_dev *dev, int pos, unsigned size, +static void quirk_io(struct pci_dev *dev, int pos, unsigned int size, const char *name) { u32 region; @@ -552,7 +552,7 @@ static void quirk_cs5536_vsa(struct pci_dev *dev) DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_ISA, quirk_cs5536_vsa); static void quirk_io_region(struct pci_dev *dev, int port, - unsigned size, int nr, const char *name) + unsigned int size, int nr, const char *name) { u16 region; struct pci_bus_region bus_region; @@ -666,7 +666,7 @@ static void piix4_io_quirk(struct pci_dev *dev, const char *name, unsigned int p base = devres & 0xffff; size = 16; for (;;) { - unsigned bit = size >> 1; + unsigned int bit = size >> 1; if ((bit & mask) == bit) break; size = bit; @@ -692,7 +692,7 @@ static void piix4_mem_quirk(struct pci_dev *dev, const char *name, unsigned int mask = (devres & 0x3f) << 16; size = 128 << 16; for (;;) { - unsigned bit = size >> 1; + unsigned int bit = size >> 1; if ((bit & mask) == bit) break; size = bit; @@ -806,7 +806,7 @@ static void ich6_lpc_acpi_gpio(struct pci_dev *dev) "ICH6 GPIO"); } -static void ich6_lpc_generic_decode(struct pci_dev *dev, unsigned reg, +static void ich6_lpc_generic_decode(struct pci_dev *dev, unsigned int reg, const char *name, int dynsize) { u32 val; @@ -850,7 +850,7 @@ static void quirk_ich6_lpc(struct pci_dev *dev) DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc); -static void ich7_lpc_generic_decode(struct pci_dev *dev, unsigned reg, +static void ich7_lpc_generic_decode(struct pci_dev *dev, unsigned int reg, const char *name) { u32 val; @@ -2700,7 +2700,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, * then the device can't use INTx interrupts. Tegra's PCIe root ports don't * generate MSI interrupts for PME and AER events instead only INTx interrupts * are generated. 
Though Tegra's PCIe root ports can generate MSI interrupts - * for other events, since PCIe specificiation doesn't support using a mix of + * for other events, since PCIe specification doesn't support using a mix of * INTx and MSI/MSI-X, it is required to disable MSI interrupts to avoid port * service drivers registering their respective ISRs for MSIs. */ @@ -3612,6 +3612,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset); +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003e, quirk_no_bus_reset); /* * Root port on some Cavium CN8xxx chips do not successfully complete a bus @@ -5795,3 +5796,58 @@ static void apex_pci_fixup_class(struct pci_dev *pdev) } DECLARE_PCI_FIXUP_CLASS_HEADER(0x1ac1, 0x089a, PCI_CLASS_NOT_DEFINED, 8, apex_pci_fixup_class); + +/* + * Pericom PI7C9X2G404/PI7C9X2G304/PI7C9X2G303 switch erratum E5 - + * ACS P2P Request Redirect is not functional + * + * When ACS P2P Request Redirect is enabled and bandwidth is not balanced + * between upstream and downstream ports, packets are queued in an internal + * buffer until CPLD packet. The workaround is to use the switch in store and + * forward mode. + */ +#define PI7C9X2Gxxx_MODE_REG 0x74 +#define PI7C9X2Gxxx_STORE_FORWARD_MODE BIT(0) +static void pci_fixup_pericom_acs_store_forward(struct pci_dev *pdev) +{ + struct pci_dev *upstream; + u16 val; + + /* Downstream ports only */ + if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) + return; + + /* Check for ACS P2P Request Redirect use */ + if (!pdev->acs_cap) + return; + pci_read_config_word(pdev, pdev->acs_cap + PCI_ACS_CTRL, &val); + if (!(val & PCI_ACS_RR)) + return; + + upstream = pci_upstream_bridge(pdev); + if (!upstream) + return; + + pci_read_config_word(upstream, PI7C9X2Gxxx_MODE_REG, &val); + if (!(val & PI7C9X2Gxxx_STORE_FORWARD_MODE)) { + pci_info(upstream, "Setting PI7C9X2Gxxx store-forward mode to avoid ACS erratum\n"); + pci_write_config_word(upstream, PI7C9X2Gxxx_MODE_REG, val | + PI7C9X2Gxxx_STORE_FORWARD_MODE); + } +} +/* + * Apply fixup on enable and on resume, in order to apply the fix up whenever + * ACS configuration changes or switch mode is reset + */ +DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2404, + pci_fixup_pericom_acs_store_forward); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2404, + pci_fixup_pericom_acs_store_forward); +DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2304, + pci_fixup_pericom_acs_store_forward); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2304, + pci_fixup_pericom_acs_store_forward); +DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2303, + pci_fixup_pericom_acs_store_forward); +DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2303, + pci_fixup_pericom_acs_store_forward); diff --git a/drivers/pci/rom.c b/drivers/pci/rom.c index 8fc9a4e911e3..e18d3a4383ba 100644 --- a/drivers/pci/rom.c +++ b/drivers/pci/rom.c @@ -85,7 +85,7 @@ static size_t pci_get_rom_size(struct pci_dev *pdev, void __iomem *rom, { void __iomem *image; int last_image; - unsigned length; + unsigned int length; image = rom; do { diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index 2ce636937c6e..547396ec50b5 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -1525,7 +1525,7 @@ static void pci_bridge_release_resources(struct pci_bus *bus, { struct 
pci_dev *dev = bus->self; struct resource *r; - unsigned old_flags = 0; + unsigned int old_flags = 0; struct resource *b_res; int idx = 1; diff --git a/drivers/pci/setup-irq.c b/drivers/pci/setup-irq.c index 7129494754dd..cc7d26b015f3 100644 --- a/drivers/pci/setup-irq.c +++ b/drivers/pci/setup-irq.c @@ -8,7 +8,6 @@ * David Miller (davem@redhat.com) */ - #include <linux/kernel.h> #include <linux/pci.h> #include <linux/errno.h> @@ -28,25 +27,26 @@ void pci_assign_irq(struct pci_dev *dev) return; } - /* If this device is not on the primary bus, we need to figure out - which interrupt pin it will come in on. We know which slot it - will come in on 'cos that slot is where the bridge is. Each - time the interrupt line passes through a PCI-PCI bridge we must - apply the swizzle function. */ - + /* + * If this device is not on the primary bus, we need to figure out + * which interrupt pin it will come in on. We know which slot it + * will come in on because that slot is where the bridge is. Each + * time the interrupt line passes through a PCI-PCI bridge we must + * apply the swizzle function. + */ pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin); /* Cope with illegal. */ if (pin > 4) pin = 1; if (pin) { - /* Follow the chain of bridges, swizzling as we go. */ + /* Follow the chain of bridges, swizzling as we go. */ if (hbrg->swizzle_irq) slot = (*(hbrg->swizzle_irq))(dev, &pin); /* - * If a swizzling function is not used map_irq must - * ignore slot + * If a swizzling function is not used, map_irq() must + * ignore slot. */ irq = (*(hbrg->map_irq))(dev, slot, pin); if (irq == -1) @@ -56,7 +56,9 @@ void pci_assign_irq(struct pci_dev *dev) pci_dbg(dev, "assign IRQ: got %d\n", dev->irq); - /* Always tell the device, so the driver knows what is - the real IRQ to use; the device does not use it. */ + /* + * Always tell the device, so the driver knows what is the real IRQ + * to use; the device does not use it. + */ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq); } diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c index 0b301f8be9ed..38c2b036fb8e 100644 --- a/drivers/pci/switch/switchtec.c +++ b/drivers/pci/switch/switchtec.c @@ -45,6 +45,7 @@ enum mrpc_state { MRPC_QUEUED, MRPC_RUNNING, MRPC_DONE, + MRPC_IO_ERROR, }; struct switchtec_user { @@ -66,6 +67,19 @@ struct switchtec_user { int event_cnt; }; +/* + * The MMIO reads to the device_id register should always return the device ID + * of the device, otherwise the firmware is probably stuck or unreachable + * due to a firmware reset which clears PCI state including the BARs and Memory + * Space Enable bits. 
+ */ +static int is_firmware_running(struct switchtec_dev *stdev) +{ + u32 device = ioread32(&stdev->mmio_sys_info->device_id); + + return stdev->pdev->device == device; +} + static struct switchtec_user *stuser_create(struct switchtec_dev *stdev) { struct switchtec_user *stuser; @@ -113,6 +127,7 @@ static void stuser_set_state(struct switchtec_user *stuser, [MRPC_QUEUED] = "QUEUED", [MRPC_RUNNING] = "RUNNING", [MRPC_DONE] = "DONE", + [MRPC_IO_ERROR] = "IO_ERROR", }; stuser->state = state; @@ -184,9 +199,26 @@ static int mrpc_queue_cmd(struct switchtec_user *stuser) return 0; } +static void mrpc_cleanup_cmd(struct switchtec_dev *stdev) +{ + /* requires the mrpc_mutex to already be held when called */ + + struct switchtec_user *stuser = list_entry(stdev->mrpc_queue.next, + struct switchtec_user, list); + + stuser->cmd_done = true; + wake_up_interruptible(&stuser->cmd_comp); + list_del_init(&stuser->list); + stuser_put(stuser); + stdev->mrpc_busy = 0; + + mrpc_cmd_submit(stdev); +} + static void mrpc_complete_cmd(struct switchtec_dev *stdev) { /* requires the mrpc_mutex to already be held when called */ + struct switchtec_user *stuser; if (list_empty(&stdev->mrpc_queue)) @@ -206,7 +238,8 @@ static void mrpc_complete_cmd(struct switchtec_dev *stdev) stuser_set_state(stuser, MRPC_DONE); stuser->return_code = 0; - if (stuser->status != SWITCHTEC_MRPC_STATUS_DONE) + if (stuser->status != SWITCHTEC_MRPC_STATUS_DONE && + stuser->status != SWITCHTEC_MRPC_STATUS_ERROR) goto out; if (stdev->dma_mrpc) @@ -223,13 +256,7 @@ static void mrpc_complete_cmd(struct switchtec_dev *stdev) memcpy_fromio(stuser->data, &stdev->mmio_mrpc->output_data, stuser->read_len); out: - stuser->cmd_done = true; - wake_up_interruptible(&stuser->cmd_comp); - list_del_init(&stuser->list); - stuser_put(stuser); - stdev->mrpc_busy = 0; - - mrpc_cmd_submit(stdev); + mrpc_cleanup_cmd(stdev); } static void mrpc_event_work(struct work_struct *work) @@ -246,6 +273,23 @@ static void mrpc_event_work(struct work_struct *work) mutex_unlock(&stdev->mrpc_mutex); } +static void mrpc_error_complete_cmd(struct switchtec_dev *stdev) +{ + /* requires the mrpc_mutex to already be held when called */ + + struct switchtec_user *stuser; + + if (list_empty(&stdev->mrpc_queue)) + return; + + stuser = list_entry(stdev->mrpc_queue.next, + struct switchtec_user, list); + + stuser_set_state(stuser, MRPC_IO_ERROR); + + mrpc_cleanup_cmd(stdev); +} + static void mrpc_timeout_work(struct work_struct *work) { struct switchtec_dev *stdev; @@ -257,6 +301,11 @@ static void mrpc_timeout_work(struct work_struct *work) mutex_lock(&stdev->mrpc_mutex); + if (!is_firmware_running(stdev)) { + mrpc_error_complete_cmd(stdev); + goto out; + } + if (stdev->dma_mrpc) status = stdev->dma_mrpc->status; else @@ -327,7 +376,7 @@ static ssize_t field ## _show(struct device *dev, \ return io_string_show(buf, &si->gen4.field, \ sizeof(si->gen4.field)); \ else \ - return -ENOTSUPP; \ + return -EOPNOTSUPP; \ } \ \ static DEVICE_ATTR_RO(field) @@ -544,6 +593,11 @@ static ssize_t switchtec_dev_read(struct file *filp, char __user *data, if (rc) return rc; + if (stuser->state == MRPC_IO_ERROR) { + mutex_unlock(&stdev->mrpc_mutex); + return -EIO; + } + if (stuser->state != MRPC_DONE) { mutex_unlock(&stdev->mrpc_mutex); return -EBADE; @@ -569,7 +623,8 @@ static ssize_t switchtec_dev_read(struct file *filp, char __user *data, out: mutex_unlock(&stdev->mrpc_mutex); - if (stuser->status == SWITCHTEC_MRPC_STATUS_DONE) + if (stuser->status == SWITCHTEC_MRPC_STATUS_DONE || + stuser->status == 
SWITCHTEC_MRPC_STATUS_ERROR) return size; else if (stuser->status == SWITCHTEC_MRPC_STATUS_INTERRUPTED) return -ENXIO; @@ -613,7 +668,7 @@ static int ioctl_flash_info(struct switchtec_dev *stdev, info.flash_length = ioread32(&fi->gen4.flash_length); info.num_partitions = SWITCHTEC_NUM_PARTITIONS_GEN4; } else { - return -ENOTSUPP; + return -EOPNOTSUPP; } if (copy_to_user(uinfo, &info, sizeof(info))) @@ -821,7 +876,7 @@ static int ioctl_flash_part_info(struct switchtec_dev *stdev, if (ret) return ret; } else { - return -ENOTSUPP; + return -EOPNOTSUPP; } if (copy_to_user(uinfo, &info, sizeof(info))) @@ -969,6 +1024,9 @@ static int event_ctl(struct switchtec_dev *stdev, return PTR_ERR(reg); hdr = ioread32(reg); + if (hdr & SWITCHTEC_EVENT_NOT_SUPP) + return -EOPNOTSUPP; + for (i = 0; i < ARRAY_SIZE(ctl->data); i++) ctl->data[i] = ioread32(®[i + 1]); @@ -1041,7 +1099,7 @@ static int ioctl_event_ctl(struct switchtec_dev *stdev, for (ctl.index = 0; ctl.index < nr_idxs; ctl.index++) { ctl.flags = event_flags; ret = event_ctl(stdev, &ctl); - if (ret < 0) + if (ret < 0 && ret != -EOPNOTSUPP) return ret; } } else { @@ -1078,7 +1136,7 @@ static int ioctl_pff_to_port(struct switchtec_dev *stdev, break; } - reg = ioread32(&pcfg->vep_pff_inst_id); + reg = ioread32(&pcfg->vep_pff_inst_id) & 0xFF; if (reg == p.pff) { p.port = SWITCHTEC_IOCTL_PFF_VEP; break; @@ -1124,7 +1182,7 @@ static int ioctl_port_to_pff(struct switchtec_dev *stdev, p.pff = ioread32(&pcfg->usp_pff_inst_id); break; case SWITCHTEC_IOCTL_PFF_VEP: - p.pff = ioread32(&pcfg->vep_pff_inst_id); + p.pff = ioread32(&pcfg->vep_pff_inst_id) & 0xFF; break; default: if (p.port > ARRAY_SIZE(pcfg->dsp_pff_inst_id)) @@ -1348,6 +1406,9 @@ static int mask_event(struct switchtec_dev *stdev, int eid, int idx) hdr_reg = event_regs[eid].map_reg(stdev, off, idx); hdr = ioread32(hdr_reg); + if (hdr & SWITCHTEC_EVENT_NOT_SUPP) + return 0; + if (!(hdr & SWITCHTEC_EVENT_OCCURRED && hdr & SWITCHTEC_EVENT_EN_IRQ)) return 0; @@ -1498,7 +1559,7 @@ static void init_pff(struct switchtec_dev *stdev) if (reg < stdev->pff_csr_count) stdev->pff_local[reg] = 1; - reg = ioread32(&pcfg->vep_pff_inst_id); + reg = ioread32(&pcfg->vep_pff_inst_id) & 0xFF; if (reg < stdev->pff_csr_count) stdev->pff_local[reg] = 1; @@ -1556,7 +1617,7 @@ static int switchtec_init_pci(struct switchtec_dev *stdev, else if (stdev->gen == SWITCHTEC_GEN4) part_id = &stdev->mmio_sys_info->gen4.partition_id; else - return -ENOTSUPP; + return -EOPNOTSUPP; stdev->partition = ioread8(part_id); stdev->partition_count = ioread8(&stdev->mmio_ntb->partition_count); diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c index 4be24890132e..a4fc4d0690fe 100644 --- a/drivers/pci/vpd.c +++ b/drivers/pci/vpd.c @@ -57,10 +57,7 @@ static size_t pci_vpd_size(struct pci_dev *dev) size_t off = 0, size; unsigned char tag, header[1+2]; /* 1 byte tag, 2 bytes length */ - /* Otherwise the following reads would fail. 
*/ - dev->vpd.len = PCI_VPD_MAX_SIZE; - - while (pci_read_vpd(dev, off, 1, header) == 1) { + while (pci_read_vpd_any(dev, off, 1, header) == 1) { size = 0; if (off == 0 && (header[0] == 0x00 || header[0] == 0xff)) @@ -68,7 +65,7 @@ static size_t pci_vpd_size(struct pci_dev *dev) if (header[0] & PCI_VPD_LRDT) { /* Large Resource Data Type Tag */ - if (pci_read_vpd(dev, off + 1, 2, &header[1]) != 2) { + if (pci_read_vpd_any(dev, off + 1, 2, &header[1]) != 2) { pci_warn(dev, "failed VPD read at offset %zu\n", off + 1); return off ?: PCI_VPD_SZ_INVALID; @@ -99,14 +96,14 @@ error: return off ?: PCI_VPD_SZ_INVALID; } -static bool pci_vpd_available(struct pci_dev *dev) +static bool pci_vpd_available(struct pci_dev *dev, bool check_size) { struct pci_vpd *vpd = &dev->vpd; if (!vpd->cap) return false; - if (vpd->len == 0) { + if (vpd->len == 0 && check_size) { vpd->len = pci_vpd_size(dev); if (vpd->len == PCI_VPD_SZ_INVALID) { vpd->cap = 0; @@ -156,24 +153,27 @@ static int pci_vpd_wait(struct pci_dev *dev, bool set) } static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, - void *arg) + void *arg, bool check_size) { struct pci_vpd *vpd = &dev->vpd; + unsigned int max_len; int ret = 0; loff_t end = pos + count; u8 *buf = arg; - if (!pci_vpd_available(dev)) + if (!pci_vpd_available(dev, check_size)) return -ENODEV; if (pos < 0) return -EINVAL; - if (pos > vpd->len) + max_len = check_size ? vpd->len : PCI_VPD_MAX_SIZE; + + if (pos >= max_len) return 0; - if (end > vpd->len) { - end = vpd->len; + if (end > max_len) { + end = max_len; count = end - pos; } @@ -217,20 +217,23 @@ static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, } static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count, - const void *arg) + const void *arg, bool check_size) { struct pci_vpd *vpd = &dev->vpd; + unsigned int max_len; const u8 *buf = arg; loff_t end = pos + count; int ret = 0; - if (!pci_vpd_available(dev)) + if (!pci_vpd_available(dev, check_size)) return -ENODEV; if (pos < 0 || (pos & 3) || (count & 3)) return -EINVAL; - if (end > vpd->len) + max_len = check_size ? 
vpd->len : PCI_VPD_MAX_SIZE; + + if (end > max_len) return -EINVAL; if (mutex_lock_killable(&vpd->lock)) @@ -313,7 +316,7 @@ void *pci_vpd_alloc(struct pci_dev *dev, unsigned int *size) void *buf; int cnt; - if (!pci_vpd_available(dev)) + if (!pci_vpd_available(dev, true)) return ERR_PTR(-ENODEV); len = dev->vpd.len; @@ -381,6 +384,24 @@ static int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off, return -ENOENT; } +static ssize_t __pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf, + bool check_size) +{ + ssize_t ret; + + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { + dev = pci_get_func0_dev(dev); + if (!dev) + return -ENODEV; + + ret = pci_vpd_read(dev, pos, count, buf, check_size); + pci_dev_put(dev); + return ret; + } + + return pci_vpd_read(dev, pos, count, buf, check_size); +} + /** * pci_read_vpd - Read one entry from Vital Product Data * @dev: PCI device struct @@ -390,6 +411,20 @@ static int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off, */ ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) { + return __pci_read_vpd(dev, pos, count, buf, true); +} +EXPORT_SYMBOL(pci_read_vpd); + +/* Same, but allow to access any address */ +ssize_t pci_read_vpd_any(struct pci_dev *dev, loff_t pos, size_t count, void *buf) +{ + return __pci_read_vpd(dev, pos, count, buf, false); +} +EXPORT_SYMBOL(pci_read_vpd_any); + +static ssize_t __pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, + const void *buf, bool check_size) +{ ssize_t ret; if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { @@ -397,14 +432,13 @@ ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) if (!dev) return -ENODEV; - ret = pci_vpd_read(dev, pos, count, buf); + ret = pci_vpd_write(dev, pos, count, buf, check_size); pci_dev_put(dev); return ret; } - return pci_vpd_read(dev, pos, count, buf); + return pci_vpd_write(dev, pos, count, buf, check_size); } -EXPORT_SYMBOL(pci_read_vpd); /** * pci_write_vpd - Write entry to Vital Product Data @@ -415,22 +449,17 @@ EXPORT_SYMBOL(pci_read_vpd); */ ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) { - ssize_t ret; - - if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { - dev = pci_get_func0_dev(dev); - if (!dev) - return -ENODEV; - - ret = pci_vpd_write(dev, pos, count, buf); - pci_dev_put(dev); - return ret; - } - - return pci_vpd_write(dev, pos, count, buf); + return __pci_write_vpd(dev, pos, count, buf, true); } EXPORT_SYMBOL(pci_write_vpd); +/* Same, but allow to access any address */ +ssize_t pci_write_vpd_any(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) +{ + return __pci_write_vpd(dev, pos, count, buf, false); +} +EXPORT_SYMBOL(pci_write_vpd_any); + int pci_vpd_find_ro_info_keyword(const void *buf, unsigned int len, const char *kw, unsigned int *size) { diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c index 2156c632524d..d858d25b6cab 100644 --- a/drivers/pci/xen-pcifront.c +++ b/drivers/pci/xen-pcifront.c @@ -588,61 +588,43 @@ static pci_ers_result_t pcifront_common_process(int cmd, struct pcifront_device *pdev, pci_channel_state_t state) { - pci_ers_result_t result; struct pci_driver *pdrv; int bus = pdev->sh_info->aer_op.bus; int devfn = pdev->sh_info->aer_op.devfn; int domain = pdev->sh_info->aer_op.domain; struct pci_dev *pcidev; - int flag = 0; dev_dbg(&pdev->xdev->dev, "pcifront AER process: cmd %x (bus:%x, devfn%x)", cmd, bus, devfn); - result = PCI_ERS_RESULT_NONE; pcidev = 
-	if (!pcidev || !pcidev->driver) {
+	if (!pcidev || !pcidev->dev.driver) {
 		dev_err(&pdev->xdev->dev, "device or AER driver is NULL\n");
 		pci_dev_put(pcidev);
-		return result;
+		return PCI_ERS_RESULT_NONE;
 	}
-	pdrv = pcidev->driver;
-
-	if (pdrv) {
-		if (pdrv->err_handler && pdrv->err_handler->error_detected) {
-			pci_dbg(pcidev, "trying to call AER service\n");
-			if (pcidev) {
-				flag = 1;
-				switch (cmd) {
-				case XEN_PCI_OP_aer_detected:
-					result = pdrv->err_handler->
-						 error_detected(pcidev, state);
-					break;
-				case XEN_PCI_OP_aer_mmio:
-					result = pdrv->err_handler->
-						 mmio_enabled(pcidev);
-					break;
-				case XEN_PCI_OP_aer_slotreset:
-					result = pdrv->err_handler->
-						 slot_reset(pcidev);
-					break;
-				case XEN_PCI_OP_aer_resume:
-					pdrv->err_handler->resume(pcidev);
-					break;
-				default:
-					dev_err(&pdev->xdev->dev,
-						"bad request in aer recovery "
-						"operation!\n");
-
-				}
-			}
+	pdrv = to_pci_driver(pcidev->dev.driver);
+
+	if (pdrv->err_handler && pdrv->err_handler->error_detected) {
+		pci_dbg(pcidev, "trying to call AER service\n");
+		switch (cmd) {
+		case XEN_PCI_OP_aer_detected:
+			return pdrv->err_handler->error_detected(pcidev, state);
+		case XEN_PCI_OP_aer_mmio:
+			return pdrv->err_handler->mmio_enabled(pcidev);
+		case XEN_PCI_OP_aer_slotreset:
+			return pdrv->err_handler->slot_reset(pcidev);
+		case XEN_PCI_OP_aer_resume:
+			pdrv->err_handler->resume(pcidev);
+			return PCI_ERS_RESULT_NONE;
+		default:
+			dev_err(&pdev->xdev->dev,
+				"bad request in aer recovery operation!\n");
 		}
 	}
 
-	if (!flag)
-		result = PCI_ERS_RESULT_NONE;
-
-	return result;
+	return PCI_ERS_RESULT_NONE;
 }
diff --git a/drivers/ssb/pcihost_wrapper.c b/drivers/ssb/pcihost_wrapper.c
index 410215c16920..dd70fd41c77d 100644
--- a/drivers/ssb/pcihost_wrapper.c
+++ b/drivers/ssb/pcihost_wrapper.c
@@ -69,7 +69,6 @@ static int ssb_pcihost_probe(struct pci_dev *dev,
 {
 	struct ssb_bus *ssb;
 	int err = -ENOMEM;
-	const char *name;
 	u32 val;
 
 	ssb = kzalloc(sizeof(*ssb), GFP_KERNEL);
@@ -78,10 +77,7 @@ static int ssb_pcihost_probe(struct pci_dev *dev,
 	err = pci_enable_device(dev);
 	if (err)
 		goto err_kfree_ssb;
-	name = dev_name(&dev->dev);
-	if (dev->driver && dev->driver->name)
-		name = dev->driver->name;
-	err = pci_request_regions(dev, name);
+	err = pci_request_regions(dev, dev_driver_string(&dev->dev));
 	if (err)
 		goto err_pci_disable;
 	pci_set_master(dev);
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index e03627ad4460..59af251e7576 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -86,8 +86,6 @@ source "drivers/staging/vc04_services/Kconfig"
 
 source "drivers/staging/pi433/Kconfig"
 
-source "drivers/staging/mt7621-pci/Kconfig"
-
 source "drivers/staging/mt7621-dma/Kconfig"
 
 source "drivers/staging/ralink-gdma/Kconfig"
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index c7f8d8d8dd11..76f413470bc8 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -33,7 +33,6 @@ obj-$(CONFIG_KS7010)		+= ks7010/
 obj-$(CONFIG_GREYBUS)		+= greybus/
 obj-$(CONFIG_BCM2835_VCHIQ)	+= vc04_services/
 obj-$(CONFIG_PI433)		+= pi433/
-obj-$(CONFIG_PCI_MT7621)	+= mt7621-pci/
 obj-$(CONFIG_SOC_MT7621)	+= mt7621-dma/
 obj-$(CONFIG_DMA_RALINK)	+= ralink-gdma/
 obj-$(CONFIG_SOC_MT7621)	+= mt7621-dts/
diff --git a/drivers/staging/mt7621-pci/Kconfig b/drivers/staging/mt7621-pci/Kconfig
deleted file mode 100644
index ce58042f2f21..000000000000
--- a/drivers/staging/mt7621-pci/Kconfig
+++ /dev/null
@@ -1,8 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-config PCI_MT7621
tristate "MediaTek MT7621 PCI Controller" - depends on RALINK - select PCI_DRIVERS_GENERIC - help - This selects a driver for the MediaTek MT7621 PCI Controller. - diff --git a/drivers/staging/mt7621-pci/Makefile b/drivers/staging/mt7621-pci/Makefile deleted file mode 100644 index f4e651cf7ce3..000000000000 --- a/drivers/staging/mt7621-pci/Makefile +++ /dev/null @@ -1,2 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_PCI_MT7621) += pci-mt7621.o diff --git a/drivers/staging/mt7621-pci/TODO b/drivers/staging/mt7621-pci/TODO deleted file mode 100644 index d674a9ac85c1..000000000000 --- a/drivers/staging/mt7621-pci/TODO +++ /dev/null @@ -1,4 +0,0 @@ - -- general code review and cleanup - -Cc: NeilBrown <neil@brown.name> diff --git a/drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt b/drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt deleted file mode 100644 index 327a68267309..000000000000 --- a/drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt +++ /dev/null @@ -1,104 +0,0 @@ -MediaTek MT7621 PCIe controller - -Required properties: -- compatible: "mediatek,mt7621-pci" -- device_type: Must be "pci" -- reg: Base addresses and lengths of the PCIe subsys and root ports. -- bus-range: Range of bus numbers associated with this controller. -- #address-cells: Address representation for root ports (must be 3) -- pinctrl-names : The pin control state names. -- pinctrl-0: The "default" pinctrl state. -- #size-cells: Size representation for root ports (must be 2) -- ranges: Ranges for the PCI memory and I/O regions. -- #interrupt-cells: Must be 1 -- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties. - Please refer to the standard PCI bus binding document for a more detailed - explanation. -- status: either "disabled" or "okay". -- resets: Must contain an entry for each entry in reset-names. - See ../reset/reset.txt for details. -- reset-names: Must be "pcie0", "pcie1", "pcieN"... based on the number of - root ports. -- clocks: Must contain an entry for each entry in clock-names. - See ../clocks/clock-bindings.txt for details. -- clock-names: Must be "pcie0", "pcie1", "pcieN"... based on the number of - root ports. -- reset-gpios: GPIO specs for the reset pins. - -In addition, the device tree node must have sub-nodes describing each PCIe port -interface, having the following mandatory properties: - -Required properties: -- reg: Only the first four bytes are used to refer to the correct bus number - and device number. -- #address-cells: Must be 3 -- #size-cells: Must be 2 -- ranges: Sub-ranges distributed from the PCIe controller node. An empty - property is sufficient. -- bus-range: Range of bus numbers associated with this port. 
-
-Example for MT7621:
-
-	pcie: pcie@1e140000 {
-		compatible = "mediatek,mt7621-pci";
-		reg = <0x1e140000 0x100		/* host-pci bridge registers */
-			0x1e142000 0x100	/* pcie port 0 RC control registers */
-			0x1e143000 0x100	/* pcie port 1 RC control registers */
-			0x1e144000 0x100>;	/* pcie port 2 RC control registers */
-
-		#address-cells = <3>;
-		#size-cells = <2>;
-
-		pinctrl-names = "default";
-		pinctrl-0 = <&pcie_pins>;
-
-		device_type = "pci";
-
-		bus-range = <0 255>;
-		ranges = <
-			0x02000000 0 0x00000000 0x60000000 0 0x10000000	/* pci memory */
-			0x01000000 0 0x00000000 0x1e160000 0 0x00010000	/* io space */
-		>;
-
-		#interrupt-cells = <1>;
-		interrupt-map-mask = <0xF0000 0 0 1>;
-		interrupt-map = <0x10000 0 0 1 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
-				<0x20000 0 0 1 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
-				<0x30000 0 0 1 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
-
-		status = "disabled";
-
-		resets = <&rstctrl 24 &rstctrl 25 &rstctrl 26>;
-		reset-names = "pcie0", "pcie1", "pcie2";
-		clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>;
-		clock-names = "pcie0", "pcie1", "pcie2";
-
-		reset-gpios = <&gpio 19 GPIO_ACTIVE_LOW>,
-				<&gpio 8 GPIO_ACTIVE_LOW>,
-				<&gpio 7 GPIO_ACTIVE_LOW>;
-
-		pcie@0,0 {
-			reg = <0x0000 0 0 0 0>;
-			#address-cells = <3>;
-			#size-cells = <2>;
-			ranges;
-			bus-range = <0x00 0xff>;
-		};
-
-		pcie@1,0 {
-			reg = <0x0800 0 0 0 0>;
-			#address-cells = <3>;
-			#size-cells = <2>;
-			ranges;
-			bus-range = <0x00 0xff>;
-		};
-
-		pcie@2,0 {
-			reg = <0x1000 0 0 0 0>;
-			#address-cells = <3>;
-			#size-cells = <2>;
-			ranges;
-			bus-range = <0x00 0xff>;
-		};
-	};
-
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 1d8a4c089a85..92adf6107864 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -111,7 +111,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 	struct xhci_driver_data *driver_data;
 	const struct pci_device_id *id;
 
-	id = pci_match_id(pdev->driver->id_table, pdev);
+	id = pci_match_id(to_pci_driver(pdev->dev.driver)->id_table, pdev);
 
 	if (id && id->driver_data) {
 		driver_data = (struct xhci_driver_data *)id->driver_data;
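
For context on the drivers/pci/vpd.c hunks above, here is an illustrative sketch, not part of the merge, of how a caller could use the newly exported accessors. Only pci_read_vpd() and pci_read_vpd_any() come from the patch; demo_dump_vpd_head() and its behaviour are hypothetical.

/*
 * Illustrative sketch only -- not from the patch above. A hypothetical
 * driver helper that reads the first bytes of a function's VPD twice:
 * once with the length-checked pci_read_vpd(), and once with
 * pci_read_vpd_any(), which is limited only by the architectural VPD
 * window (PCI_VPD_MAX_SIZE in the hunk above) rather than by the size
 * detected by the PCI core.
 */
#include <linux/pci.h>
#include <linux/printk.h>

static void demo_dump_vpd_head(struct pci_dev *pdev)
{
	u8 buf[16];
	ssize_t n;

	/* Bounded variant: constrained by the detected VPD size */
	n = pci_read_vpd(pdev, 0, sizeof(buf), buf);
	if (n < 0)
		pci_info(pdev, "bounded VPD read failed: %zd\n", n);

	/* "any" variant: lets the caller probe the full VPD window */
	n = pci_read_vpd_any(pdev, 0, sizeof(buf), buf);
	if (n > 0)
		print_hex_dump_bytes("vpd: ", DUMP_PREFIX_OFFSET, buf, n);
}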
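
The xen-pcifront and xhci-pci hunks above replace direct use of the cached pci_dev driver pointer with the generic struct device::driver pointer plus to_pci_driver(). A minimal sketch of that pattern follows; demo_has_aer_handler() is a made-up helper, not kernel API, and the NULL check mirrors the !pcidev->dev.driver test in the xen-pcifront hunk.

/*
 * Illustrative only: shows the to_pci_driver(pdev->dev.driver) pattern
 * used in the hunks above. The dev.driver check guards against calling
 * to_pci_driver() for a device with no driver bound.
 */
#include <linux/pci.h>

static bool demo_has_aer_handler(struct pci_dev *pdev)
{
	struct pci_driver *pdrv;

	if (!pdev->dev.driver)	/* no driver currently bound */
		return false;

	pdrv = to_pci_driver(pdev->dev.driver);
	return pdrv->err_handler && pdrv->err_handler->error_detected;
}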