Merge tag 'pci-v6.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Enable host bridge emulation for PCI_DOMAINS_GENERIC platforms (Dan
     Williams)

   - Switch vmd from custom domain number allocator to the common
     allocator to prevent a potential race with new non-VMD buses (Dan
     Williams)

   - Enable Precision Time Measurement (PTM) only if device advertises
     support for a relevant role, to prevent invalid PTM Requests that
     cause ACS violations that are reported as AER Uncorrectable
     Non-Fatal errors (Mika Westerberg)

  Resource management:

   - Prevent resource tree corruption when BAR resize fails (Ilpo
     Järvinen)

   - Restore BARs to the original size if a BAR resize fails (Ilpo
     Järvinen)

   - Remove BAR release from BAR resize attempts by the xe, i915, and
     amdgpu drivers so the PCI core can restore BARs if the resize fails
     (Ilpo Järvinen)

   - Move Resizable BAR code to rebar.c (Ilpo Järvinen)

   - Add pci_rebar_size_supported() and use it in i915 and xe (Ilpo
     Järvinen)

   - Add pci_rebar_get_max_size() and use it in xe and amdgpu (Ilpo
     Järvinen)

  Power management and error handling:

   - For drivers using PCI legacy suspend, save config state at suspend
     so that state (not any earlier state from enumeration, probe, or
     error recovery) will be restored when resuming (Lukas Wunner)

   - For devices with no driver or a driver that lacks power management,
     save config state at hibernate so that state (not any earlier state
     from enumeration, probe, or error recovery) will be restored when
     resuming (Lukas Wunner)

   - Save device config space on device addition, before driver binding,
     so error recovery works more reliably (Lukas Wunner)

   - Drop pci_save_state() from several drivers that no longer need it
     since the PCI core always does it and pci_restore_state() no longer
     invalidates the saved state (Lukas Wunner)

   - Document use of pci_save_state() by drivers to capture the state
     they want restored during error recovery (Lukas Wunner)

  Power control:

   - Add a struct pci_ops.assert_perst() function pointer to
     assert/deassert PCIe PERST# and implement it for the qcom driver
     (Krishna Chaitanya Chundru)

   - Add DT binding and pwrctrl driver for the Toshiba TC9563 PCIe
     switch, which must be held in reset after poweron so the pwrctrl
     driver can configure the switch via I2C before bringing up the
     links (Krishna Chaitanya Chundru)

  Endpoint framework:

   - Convert the endpoint doorbell test to use a threaded IRQ to fix a
     'sleeping while atomic' issue (Bhanu Seshu Kumar Valluri)

   - Add endpoint VNTB MSI doorbell support to reduce latency between
     host and endpoint (Frank Li)

  New native PCIe controller drivers:

   - Add CIX Sky1 host controller DT binding and driver (Hans Zhang)

   - Add NXP S32G host controller DT binding and driver (Vincent
     Guittot)

   - Add Renesas RZ/G3S host controller DT binding and driver (Claudiu
     Beznea)

   - Add SpacemiT K1 host controller DT binding and driver (Alex Elder)

  Amlogic Meson PCIe controller driver:

   - Update DT binding to name DBI region 'dbi', not 'elbi', and update
     driver to support both (Manivannan Sadhasivam)

  Apple PCIe controller driver:

   - Move struct pci_host_bridge allocation from pci_host_common_init()
     to callers, which significantly simplifies pcie-apple (Marc
     Zyngier)

  Broadcom STB PCIe controller driver:

   - Correctly disable advertising of ASPM L0s support (Jim Quinlan)

   - Add a panic/die handler to print diagnostic info in case PCIe
     caused an unrecoverable abort (Jim Quinlan)

  Cadence PCIe controller driver:

   - Add module support for Cadence platform host and endpoint
     controller driver (Manikandan K Pillai)

   - Split headers into 'legacy' (LGA) and 'high perf' (HPA) to prepare
     for new CIX Sky1 driver (Manikandan K Pillai)

  MediaTek PCIe controller driver:

   - Convert DT binding to YAML schema (Christian Marangi)

   - Add Airoha AN7583 DT compatible and driver support (Christian
     Marangi)

  Qualcomm PCIe controller driver:

   - Add Qualcomm Kaanapali to SM8550 DT binding (Qiang Yu)

   - Add required 'power-domains' and 'resets' to qcom sa8775p, sc7280,
     sc8280xp, sm8150, sm8250, sm8350, sm8450, sm8550, x1e80100 DT
     schemas (Krzysztof Kozlowski)

   - Look up OPP using both frequency and data rate (not just frequency)
     so RPMh votes can account for both (Krishna Chaitanya Chundru)

  Rockchip DesignWare PCIe controller driver:

   - Add Rockchip RK3528 compatible strings in DT binding (Yao Zi)

  STMicroelectronics STM32MP25 PCIe controller driver:

   - Fix a race between link training and endpoint register
     initialization (Christian Bruel)

   - Align endpoint allocations to match the ATU requirements (Christian
     Bruel)

  Synopsys DesignWare PCIe controller driver:

   - Clear L1 PM Substate Capability 'Supported' bits unless glue driver
     says it's supported, which prevents users from enabling non-working
     L1SS. Currently only qcom and tegra194 support L1SS (Bjorn Helgaas)

   - Remove now-superfluous L1SS disable code from tegra194 (Bjorn
     Helgaas)

   - Configure L1SS support in dw-rockchip when DT says
     'supports-clkreq' (Shawn Lin)

  TI Keystone PCIe controller driver:

   - Fail the probe instead of silently succeeding if ks_pcie_of_data
     didn't specify Root Complex or Endpoint mode (Siddharth Vadapalli)

   - Make keystone buildable as a loadable module, except on ARM32 where
     hook_fault_code() is __init (Siddharth Vadapalli)"

* tag 'pci-v6.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (100 commits)
  MAINTAINERS: Add Manivannan Sadhasivam as PCI/pwrctrl maintainer
  MAINTAINERS: Add CIX Sky1 PCIe controller driver maintainer
  PCI: sky1: Add PCIe host support for CIX Sky1
  dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
  PCI: cadence: Add support for High Perf Architecture (HPA) controller
  MAINTAINERS: Add NXP S32G PCIe controller driver maintainer
  PCI: s32g: Add NXP S32G PCIe controller driver (RC)
  PCI: dwc: Add register and bitfield definitions
  dt-bindings: PCI: s32g: Add NXP S32G PCIe controller
  PCI: Add Renesas RZ/G3S host controller driver
  PCI: host-generic: Move bridge allocation outside of pci_host_common_init()
  dt-bindings: PCI: Add Renesas RZ/G3S PCIe controller binding
  PCI: Validate pci_rebar_size_supported() input
  Documentation: PCI: Amend error recovery doc with pci_save_state() rules
  treewide: Drop pci_save_state() after pci_restore_state()
  PCI/ERR: Ensure error recoverability at all times
  PCI/PM: Stop needlessly clearing state_saved on enumeration and thaw
  PCI/PM: Reinstate clearing state_saved in legacy and !PM codepaths
  PCI: dw-rockchip: Configure L1SS support
  PCI: tegra194: Remove unnecessary L1SS disable code
  ...
This commit is contained in: Linus Torvalds, 2025-12-04 17:29:41 -08:00
126 changed files with 7905 additions and 1672 deletions


@@ -326,6 +326,21 @@ be recovered, there is nothing more that can be done; the platform
 will typically report a "permanent failure" in such a case. The
 device will be considered "dead" in this case.
 
+Drivers typically need to call pci_restore_state() after reset to
+re-initialize the device's config space registers and thereby
+bring it from D0\ :sub:`uninitialized` into D0\ :sub:`active` state
+(PCIe r7.0 sec 5.3.1.1). The PCI core invokes pci_save_state()
+on enumeration after initializing config space to ensure that a
+saved state is available for subsequent error recovery.
+Drivers which modify config space on probe may need to invoke
+pci_save_state() afterwards to record those changes for later
+error recovery. When going into system suspend, pci_save_state()
+is called for every PCI device and that state will be restored
+not only on resume, but also on any subsequent error recovery.
+In the unlikely event that the saved state recorded on suspend
+is unsuitable for error recovery, drivers should call
+pci_save_state() on resume.
+
 Drivers for multi-function cards will need to coordinate among
 themselves as to which driver instance will perform any "one-shot"
 or global device initialization. For example, the Symbios sym53cxx2


@@ -20,9 +20,10 @@ allOf:
   select:
     properties:
       compatible:
-        enum:
-          - amlogic,axg-pcie
-          - amlogic,g12a-pcie
+        contains:
+          enum:
+            - amlogic,axg-pcie
+            - amlogic,g12a-pcie
     required:
       - compatible
@@ -36,13 +37,13 @@ properties:
   reg:
     items:
-      - description: External local bus interface registers
+      - description: Data Bus Interface registers
       - description: Meson designed configuration registers
       - description: PCIe configuration space

   reg-names:
     items:
-      - const: elbi
+      - const: dbi
       - const: cfg
       - const: config
@@ -51,15 +52,15 @@ properties:
   clocks:
     items:
-      - description: PCIe PHY clock
       - description: PCIe GEN 100M PLL clock
       - description: PCIe RC clock gate
+      - description: PCIe PHY clock

   clock-names:
     items:
-      - const: general
       - const: pclk
       - const: port
+      - const: general

   phys:
     maxItems: 1
@@ -88,7 +89,7 @@ required:
   - reg
   - reg-names
   - interrupts
-  - clock
+  - clocks
   - clock-names
   - "#address-cells"
   - "#size-cells"
@@ -113,10 +114,10 @@ examples:
     pcie: pcie@f9800000 {
       compatible = "amlogic,axg-pcie", "snps,dw-pcie";
       reg = <0xf9800000 0x400000>, <0xff646000 0x2000>, <0xf9f00000 0x100000>;
-      reg-names = "elbi", "cfg", "config";
+      reg-names = "dbi", "cfg", "config";
       interrupts = <GIC_SPI 177 IRQ_TYPE_EDGE_RISING>;
-      clocks = <&pclk>, <&clk_port>, <&clk_phy>;
-      clock-names = "pclk", "port", "general";
+      clocks = <&clk_phy>, <&pclk>, <&clk_port>;
+      clock-names = "general", "pclk", "port";
       resets = <&reset_pcie_port>, <&reset_pcie_apb>;
       reset-names = "port", "apb";
       phys = <&pcie_phy>;


@@ -0,0 +1,83 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/cix,sky1-pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: CIX Sky1 PCIe Root Complex
maintainers:
- Hans Zhang <hans.zhang@cixtech.com>
description:
PCIe root complex controller based on the Cadence PCIe core.
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
const: cix,sky1-pcie-host
reg:
items:
- description: PCIe controller registers.
- description: ECAM registers.
- description: Remote CIX System Unit strap registers.
- description: Remote CIX System Unit status registers.
- description: Region for sending messages registers.
reg-names:
items:
- const: reg
- const: cfg
- const: rcsu_strap
- const: rcsu_status
- const: msg
ranges:
maxItems: 3
required:
- compatible
- ranges
- bus-range
- device_type
- interrupt-map
- interrupt-map-mask
- msi-map
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@a010000 {
compatible = "cix,sky1-pcie-host";
reg = <0x00 0x0a010000 0x00 0x10000>,
<0x00 0x2c000000 0x00 0x4000000>,
<0x00 0x0a000300 0x00 0x100>,
<0x00 0x0a000400 0x00 0x100>,
<0x00 0x60000000 0x00 0x00100000>;
reg-names = "reg", "cfg", "rcsu_strap", "rcsu_status", "msg";
ranges = <0x01000000 0x00 0x60100000 0x00 0x60100000 0x00 0x00100000>,
<0x02000000 0x00 0x60200000 0x00 0x60200000 0x00 0x1fe00000>,
<0x43000000 0x18 0x00000000 0x18 0x00000000 0x04 0x00000000>;
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0xc0 0xff>;
device_type = "pci";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH 0>,
<0 0 0 2 &gic 0 0 GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH 0>,
<0 0 0 3 &gic 0 0 GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH 0>,
<0 0 0 4 &gic 0 0 GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH 0>;
msi-map = <0xc000 &gic_its 0xc000 0x4000>;
};
};


@@ -0,0 +1,164 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/mediatek-pcie-mt7623.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: PCIe controller on MediaTek SoCs
maintainers:
- Christian Marangi <ansuelsmth@gmail.com>
properties:
compatible:
enum:
- mediatek,mt2701-pcie
- mediatek,mt7623-pcie
reg:
minItems: 4
maxItems: 4
reg-names:
items:
- const: subsys
- const: port0
- const: port1
- const: port2
clocks:
minItems: 4
maxItems: 4
clock-names:
items:
- const: free_ck
- const: sys_ck0
- const: sys_ck1
- const: sys_ck2
resets:
minItems: 3
maxItems: 3
reset-names:
items:
- const: pcie-rst0
- const: pcie-rst1
- const: pcie-rst2
phys:
minItems: 3
maxItems: 3
phy-names:
items:
- const: pcie-phy0
- const: pcie-phy1
- const: pcie-phy2
power-domains:
maxItems: 1
required:
- compatible
- reg
- reg-names
- ranges
- clocks
- clock-names
- '#interrupt-cells'
- resets
- reset-names
- phys
- phy-names
- power-domains
- pcie@0,0
- pcie@1,0
- pcie@2,0
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
unevaluatedProperties: false
examples:
# MT7623
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/clock/mt2701-clk.h>
#include <dt-bindings/reset/mt2701-resets.h>
#include <dt-bindings/phy/phy.h>
#include <dt-bindings/power/mt2701-power.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@1a140000 {
compatible = "mediatek,mt7623-pcie";
device_type = "pci";
reg = <0 0x1a140000 0 0x1000>, /* PCIe shared registers */
<0 0x1a142000 0 0x1000>, /* Port0 registers */
<0 0x1a143000 0 0x1000>, /* Port1 registers */
<0 0x1a144000 0 0x1000>; /* Port2 registers */
reg-names = "subsys", "port0", "port1", "port2";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0xf800 0 0 0>;
interrupt-map = <0x0000 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>,
<0x0800 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>,
<0x1000 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>;
clocks = <&topckgen CLK_TOP_ETHIF_SEL>,
<&hifsys CLK_HIFSYS_PCIE0>,
<&hifsys CLK_HIFSYS_PCIE1>,
<&hifsys CLK_HIFSYS_PCIE2>;
clock-names = "free_ck", "sys_ck0", "sys_ck1", "sys_ck2";
resets = <&hifsys MT2701_HIFSYS_PCIE0_RST>,
<&hifsys MT2701_HIFSYS_PCIE1_RST>,
<&hifsys MT2701_HIFSYS_PCIE2_RST>;
reset-names = "pcie-rst0", "pcie-rst1", "pcie-rst2";
phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>,
<&pcie2_phy PHY_TYPE_PCIE>;
phy-names = "pcie-phy0", "pcie-phy1", "pcie-phy2";
power-domains = <&scpsys MT2701_POWER_DOMAIN_HIF>;
bus-range = <0x00 0xff>;
ranges = <0x81000000 0 0x1a160000 0 0x1a160000 0 0x00010000>, /* I/O space */
<0x83000000 0 0x60000000 0 0x60000000 0 0x10000000>; /* memory space */
pcie@0,0 {
device_type = "pci";
reg = <0x0000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>;
ranges;
};
pcie@1,0 {
device_type = "pci";
reg = <0x0800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>;
ranges;
};
pcie@2,0 {
device_type = "pci";
reg = <0x1000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>;
ranges;
};
};
};


@@ -1,289 +0,0 @@
MediaTek Gen2 PCIe controller
Required properties:
- compatible: Should contain one of the following strings:
"mediatek,mt2701-pcie"
"mediatek,mt2712-pcie"
"mediatek,mt7622-pcie"
"mediatek,mt7623-pcie"
"mediatek,mt7629-pcie"
"airoha,en7523-pcie"
- device_type: Must be "pci"
- reg: Base addresses and lengths of the root ports.
- reg-names: Names of the above areas to use during resource lookup.
- #address-cells: Address representation for root ports (must be 3)
- #size-cells: Size representation for root ports (must be 2)
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names:
Mandatory entries:
- sys_ckN :transaction layer and data link layer clock
Required entries for MT2701/MT7623:
- free_ck :for reference clock of PCIe subsys
Required entries for MT2712/MT7622:
- ahb_ckN :AHB slave interface operating clock for CSR access and RC
initiated MMIO access
Required entries for MT7622:
- axi_ckN :application layer MMIO channel operating clock
- aux_ckN :pe2_mac_bridge and pe2_mac_core operating clock when
pcie_mac_ck/pcie_pipe_ck is turned off
- obff_ckN :OBFF functional block operating clock
- pipe_ckN :LTSSM and PHY/MAC layer operating clock
where N starting from 0 to one less than the number of root ports.
- phys: List of PHY specifiers (used by generic PHY framework).
- phy-names : Must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the
number of PHYs as specified in *phys* property.
- power-domains: A phandle and power domain specifier pair to the power domain
which is responsible for collapsing and restoring power to the peripheral.
- bus-range: Range of bus numbers associated with this controller.
- ranges: Ranges for the PCI memory and I/O regions.
Required properties for MT7623/MT2701:
- #interrupt-cells: Size representation for interrupts (must be 1)
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
Please refer to the standard PCI bus binding document for a more detailed
explanation.
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must be "pcie-rst0", "pcie-rst1", "pcie-rstN".. based on the
number of root ports.
Required properties for MT2712/MT7622/MT7629:
-interrupts: A list of interrupt outputs of the controller, must have one
entry for each PCIe port
- interrupt-names: Must include the following entries:
- "pcie_irq": The interrupt that is asserted when an MSI/INTX is received
- linux,pci-domain: PCI domain ID. Should be unique for each host controller
In addition, the device tree node must have sub-nodes describing each
PCIe port interface, having the following mandatory properties:
Required properties:
- device_type: Must be "pci"
- reg: Only the first four bytes are used to refer to the correct bus number
and device number.
- #address-cells: Must be 3
- #size-cells: Must be 2
- #interrupt-cells: Must be 1
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
Please refer to the standard PCI bus binding document for a more detailed
explanation.
- ranges: Sub-ranges distributed from the PCIe controller node. An empty
property is sufficient.
Examples for MT7623:
hifsys: syscon@1a000000 {
compatible = "mediatek,mt7623-hifsys",
"mediatek,mt2701-hifsys",
"syscon";
reg = <0 0x1a000000 0 0x1000>;
#clock-cells = <1>;
#reset-cells = <1>;
};
pcie: pcie@1a140000 {
compatible = "mediatek,mt7623-pcie";
device_type = "pci";
reg = <0 0x1a140000 0 0x1000>, /* PCIe shared registers */
<0 0x1a142000 0 0x1000>, /* Port0 registers */
<0 0x1a143000 0 0x1000>, /* Port1 registers */
<0 0x1a144000 0 0x1000>; /* Port2 registers */
reg-names = "subsys", "port0", "port1", "port2";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0xf800 0 0 0>;
interrupt-map = <0x0000 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>,
<0x0800 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>,
<0x1000 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>;
clocks = <&topckgen CLK_TOP_ETHIF_SEL>,
<&hifsys CLK_HIFSYS_PCIE0>,
<&hifsys CLK_HIFSYS_PCIE1>,
<&hifsys CLK_HIFSYS_PCIE2>;
clock-names = "free_ck", "sys_ck0", "sys_ck1", "sys_ck2";
resets = <&hifsys MT2701_HIFSYS_PCIE0_RST>,
<&hifsys MT2701_HIFSYS_PCIE1_RST>,
<&hifsys MT2701_HIFSYS_PCIE2_RST>;
reset-names = "pcie-rst0", "pcie-rst1", "pcie-rst2";
phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>,
<&pcie2_phy PHY_TYPE_PCIE>;
phy-names = "pcie-phy0", "pcie-phy1", "pcie-phy2";
power-domains = <&scpsys MT2701_POWER_DOMAIN_HIF>;
bus-range = <0x00 0xff>;
ranges = <0x81000000 0 0x1a160000 0 0x1a160000 0 0x00010000 /* I/O space */
0x83000000 0 0x60000000 0 0x60000000 0 0x10000000>; /* memory space */
pcie@0,0 {
reg = <0x0000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>;
ranges;
};
pcie@1,0 {
reg = <0x0800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>;
ranges;
};
pcie@2,0 {
reg = <0x1000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>;
ranges;
};
};
Examples for MT2712:
pcie1: pcie@112ff000 {
compatible = "mediatek,mt2712-pcie";
device_type = "pci";
reg = <0 0x112ff000 0 0x1000>;
reg-names = "port1";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
clocks = <&topckgen CLK_TOP_PE2_MAC_P1_SEL>,
<&pericfg CLK_PERI_PCIE1>;
clock-names = "sys_ck1", "ahb_ck1";
phys = <&u3port1 PHY_TYPE_PCIE>;
phy-names = "pcie-phy1";
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x11400000 0x0 0x11400000 0 0x300000>;
status = "disabled";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
pcie0: pcie@11700000 {
compatible = "mediatek,mt2712-pcie";
device_type = "pci";
reg = <0 0x11700000 0 0x1000>;
reg-names = "port0";
linux,pci-domain = <0>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
clocks = <&topckgen CLK_TOP_PE2_MAC_P0_SEL>,
<&pericfg CLK_PERI_PCIE0>;
clock-names = "sys_ck0", "ahb_ck0";
phys = <&u3port0 PHY_TYPE_PCIE>;
phy-names = "pcie-phy0";
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>;
status = "disabled";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
Examples for MT7622:
pcie0: pcie@1a143000 {
compatible = "mediatek,mt7622-pcie";
device_type = "pci";
reg = <0 0x1a143000 0 0x1000>;
reg-names = "port0";
linux,pci-domain = <0>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "pcie_irq";
clocks = <&pciesys CLK_PCIE_P0_MAC_EN>,
<&pciesys CLK_PCIE_P0_AHB_EN>,
<&pciesys CLK_PCIE_P0_AUX_EN>,
<&pciesys CLK_PCIE_P0_AXI_EN>,
<&pciesys CLK_PCIE_P0_OBFF_EN>,
<&pciesys CLK_PCIE_P0_PIPE_EN>;
clock-names = "sys_ck0", "ahb_ck0", "aux_ck0",
"axi_ck0", "obff_ck0", "pipe_ck0";
power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x8000000>;
status = "disabled";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
pcie1: pcie@1a145000 {
compatible = "mediatek,mt7622-pcie";
device_type = "pci";
reg = <0 0x1a145000 0 0x1000>;
reg-names = "port1";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "pcie_irq";
clocks = <&pciesys CLK_PCIE_P1_MAC_EN>,
/* designer has connect RC1 with p0_ahb clock */
<&pciesys CLK_PCIE_P0_AHB_EN>,
<&pciesys CLK_PCIE_P1_AUX_EN>,
<&pciesys CLK_PCIE_P1_AXI_EN>,
<&pciesys CLK_PCIE_P1_OBFF_EN>,
<&pciesys CLK_PCIE_P1_PIPE_EN>;
clock-names = "sys_ck1", "ahb_ck1", "aux_ck1",
"axi_ck1", "obff_ck1", "pipe_ck1";
power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x28000000 0x0 0x28000000 0 0x8000000>;
status = "disabled";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};


@@ -0,0 +1,438 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/mediatek-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: PCIe controller on MediaTek SoCs
maintainers:
- Christian Marangi <ansuelsmth@gmail.com>
properties:
compatible:
oneOf:
- enum:
- airoha,an7583-pcie
- mediatek,mt2712-pcie
- mediatek,mt7622-pcie
- mediatek,mt7629-pcie
- items:
- const: airoha,en7523-pcie
- const: mediatek,mt7622-pcie
reg:
maxItems: 1
reg-names:
enum: [ port0, port1 ]
clocks:
minItems: 1
maxItems: 6
clock-names:
minItems: 1
items:
- enum: [ sys_ck0, sys_ck1 ]
- enum: [ ahb_ck0, ahb_ck1 ]
- enum: [ aux_ck0, aux_ck1 ]
- enum: [ axi_ck0, axi_ck1 ]
- enum: [ obff_ck0, obff_ck1 ]
- enum: [ pipe_ck0, pipe_ck1 ]
resets:
maxItems: 1
reset-names:
const: pcie-rst1
interrupts:
maxItems: 1
interrupt-names:
const: pcie_irq
phys:
maxItems: 1
phy-names:
enum: [ pcie-phy0, pcie-phy1 ]
power-domains:
maxItems: 1
mediatek,pbus-csr:
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to pbus-csr syscon
- description: offset of pbus-csr base address register
- description: offset of pbus-csr base address mask register
description:
Phandle with two arguments to the syscon node used to detect if
a given address is accessible on PCIe controller.
'#interrupt-cells':
const: 1
interrupt-controller:
description: Interrupt controller node for handling legacy PCI interrupts.
type: object
properties:
'#address-cells':
const: 0
'#interrupt-cells':
const: 1
interrupt-controller: true
required:
- '#address-cells'
- '#interrupt-cells'
- interrupt-controller
additionalProperties: false
required:
- compatible
- reg
- reg-names
- ranges
- clocks
- clock-names
- '#interrupt-cells'
- interrupts
- interrupt-names
- interrupt-controller
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
- if:
properties:
compatible:
const: airoha,an7583-pcie
then:
properties:
reg-names:
const: port1
clocks:
maxItems: 1
clock-names:
const: sys_ck1
phy-names:
const: pcie-phy1
power-domain: false
required:
- resets
- reset-names
- phys
- phy-names
- mediatek,pbus-csr
- if:
properties:
compatible:
const: mediatek,mt2712-pcie
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
minItems: 2
maxItems: 2
reset: false
reset-names: false
power-domains: false
mediatek,pbus-csr: false
required:
- phys
- phy-names
- if:
properties:
compatible:
const: mediatek,mt7622-pcie
then:
properties:
clocks:
minItems: 6
reset: false
reset-names: false
phys: false
phy-names: false
mediatek,pbus-csr: false
required:
- power-domains
- if:
properties:
compatible:
const: mediatek,mt7629-pcie
then:
properties:
clocks:
minItems: 6
reset: false
reset-names: false
mediatek,pbus-csr: false
required:
- power-domains
- if:
properties:
compatible:
contains:
const: airoha,en7523-pcie
then:
properties:
clocks:
maxItems: 1
clock-names:
maxItems: 1
reset: false
reset-names: false
phys: false
phy-names: false
power-domain: false
mediatek,pbus-csr: false
unevaluatedProperties: false
examples:
# MT2712
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/phy/phy.h>
soc_1 {
#address-cells = <2>;
#size-cells = <2>;
pcie@112ff000 {
compatible = "mediatek,mt2712-pcie";
device_type = "pci";
reg = <0 0x112ff000 0 0x1000>;
reg-names = "port1";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
clocks = <&topckgen>, /* CLK_TOP_PE2_MAC_P1_SEL */
<&pericfg>; /* CLK_PERI_PCIE1 */
clock-names = "sys_ck1", "ahb_ck1";
phys = <&u3port1 PHY_TYPE_PCIE>;
phy-names = "pcie-phy1";
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x11400000 0x0 0x11400000 0 0x300000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
pcie@11700000 {
compatible = "mediatek,mt2712-pcie";
device_type = "pci";
reg = <0 0x11700000 0 0x1000>;
reg-names = "port0";
linux,pci-domain = <0>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
clocks = <&topckgen>, /* CLK_TOP_PE2_MAC_P0_SEL */
<&pericfg>; /* CLK_PERI_PCIE0 */
clock-names = "sys_ck0", "ahb_ck0";
phys = <&u3port0 PHY_TYPE_PCIE>;
phy-names = "pcie-phy0";
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
};
# MT7622
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/power/mt7622-power.h>
soc_2 {
#address-cells = <2>;
#size-cells = <2>;
pcie@1a143000 {
compatible = "mediatek,mt7622-pcie";
device_type = "pci";
reg = <0 0x1a143000 0 0x1000>;
reg-names = "port0";
linux,pci-domain = <0>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "pcie_irq";
clocks = <&pciesys>, /* CLK_PCIE_P0_MAC_EN */
<&pciesys>, /* CLK_PCIE_P0_AHB_EN */
<&pciesys>, /* CLK_PCIE_P0_AUX_EN */
<&pciesys>, /* CLK_PCIE_P0_AXI_EN */
<&pciesys>, /* CLK_PCIE_P0_OBFF_EN */
<&pciesys>; /* CLK_PCIE_P0_PIPE_EN */
clock-names = "sys_ck0", "ahb_ck0", "aux_ck0",
"axi_ck0", "obff_ck0", "pipe_ck0";
power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x8000000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0_1 0>,
<0 0 0 2 &pcie_intc0_1 1>,
<0 0 0 3 &pcie_intc0_1 2>,
<0 0 0 4 &pcie_intc0_1 3>;
pcie_intc0_1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
pcie@1a145000 {
compatible = "mediatek,mt7622-pcie";
device_type = "pci";
reg = <0 0x1a145000 0 0x1000>;
reg-names = "port1";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "pcie_irq";
clocks = <&pciesys>, /* CLK_PCIE_P1_MAC_EN */
/* designer has connect RC1 with p0_ahb clock */
<&pciesys>, /* CLK_PCIE_P0_AHB_EN */
<&pciesys>, /* CLK_PCIE_P1_AUX_EN */
<&pciesys>, /* CLK_PCIE_P1_AXI_EN */
<&pciesys>, /* CLK_PCIE_P1_OBFF_EN */
<&pciesys>; /* CLK_PCIE_P1_PIPE_EN */
clock-names = "sys_ck1", "ahb_ck1", "aux_ck1",
"axi_ck1", "obff_ck1", "pipe_ck1";
power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x28000000 0x0 0x28000000 0 0x8000000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1_1 0>,
<0 0 0 2 &pcie_intc1_1 1>,
<0 0 0 3 &pcie_intc1_1 2>,
<0 0 0 4 &pcie_intc1_1 3>;
pcie_intc1_1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
};
# AN7583
- |
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/en7523-clk.h>
soc_3 {
#address-cells = <2>;
#size-cells = <2>;
pcie@1fa92000 {
compatible = "airoha,an7583-pcie";
device_type = "pci";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
reg = <0x0 0x1fa92000 0x0 0x1670>;
reg-names = "port1";
clocks = <&scuclk EN7523_CLK_PCIE>;
clock-names = "sys_ck1";
phys = <&pciephy>;
phy-names = "pcie-phy1";
ranges = <0x02000000 0 0x24000000 0x0 0x24000000 0 0x4000000>;
resets = <&scuclk>; /* AN7583_PCIE1_RST */
reset-names = "pcie-rst1";
mediatek,pbus-csr = <&pbus_csr 0x8 0xc>;
interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
bus-range = <0x00 0xff>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
};


@@ -0,0 +1,130 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/nxp,s32g-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NXP S32G2xxx/S32G3xxx PCIe Root Complex controller
maintainers:
- Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
- Ionut Vicovan <ionut.vicovan@nxp.com>
description:
This PCIe controller is based on the Synopsys DesignWare PCIe IP.
The S32G SoC family has two PCIe controllers, which can be configured as
either Root Complex or Endpoint.
properties:
compatible:
oneOf:
- enum:
- nxp,s32g2-pcie
- items:
- const: nxp,s32g3-pcie
- const: nxp,s32g2-pcie
reg:
maxItems: 6
reg-names:
items:
- const: dbi
- const: dbi2
- const: atu
- const: dma
- const: ctrl
- const: config
interrupts:
minItems: 1
maxItems: 2
interrupt-names:
items:
- const: msi
- const: dma
minItems: 1
pcie@0:
description:
Describe the S32G Root Port.
type: object
$ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
reg:
maxItems: 1
phys:
maxItems: 1
required:
- reg
- phys
unevaluatedProperties: false
required:
- compatible
- reg
- reg-names
- interrupts
- interrupt-names
- ranges
- pcie@0
allOf:
- $ref: /schemas/pci/snps,dw-pcie.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/phy/phy.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
pcie@40400000 {
compatible = "nxp,s32g3-pcie", "nxp,s32g2-pcie";
reg = <0x00 0x40400000 0x0 0x00001000>, /* dbi registers */
<0x00 0x40420000 0x0 0x00001000>, /* dbi2 registers */
<0x00 0x40460000 0x0 0x00001000>, /* atu registers */
<0x00 0x40470000 0x0 0x00001000>, /* dma registers */
<0x00 0x40481000 0x0 0x000000f8>, /* ctrl registers */
<0x5f 0xffffe000 0x0 0x00002000>; /* config space */
reg-names = "dbi", "dbi2", "atu", "dma", "ctrl", "config";
dma-coherent;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
ranges =
<0x01000000 0x0 0x00000000 0x5f 0xfffe0000 0x0 0x00010000>,
<0x02000000 0x0 0x00000000 0x58 0x00000000 0x0 0x80000000>,
<0x02000000 0x1 0x00000000 0x59 0x00000000 0x6 0xfffe0000>;
bus-range = <0x0 0xff>;
interrupts = <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi", "dma";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &gic 0 0 GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &gic 0 0 GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &gic 0 0 GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;
pcie@0 {
reg = <0x0 0x0 0x0 0x0 0x0>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
device_type = "pci";
phys = <&serdes0 PHY_TYPE_PCIE 0 0>;
};
};
};


@@ -11,7 +11,7 @@ description: |
maintainers:
- Kishon Vijay Abraham I <kishon@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
$nodename:


@@ -8,7 +8,7 @@ title: Qualcomm PCI Express Root Complex Common Properties
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
reg:


@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm PCIe Endpoint Controller
maintainers:
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:


@@ -8,7 +8,7 @@ title: Qualcomm SA8255p based firmware managed and ECAM compliant PCIe Root Comp
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SA8255p SoC PCIe root complex controller is based on the Synopsys


@@ -8,7 +8,7 @@ title: Qualcomm SA8775p PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SA8775p SoC PCIe root complex controller is based on the Synopsys
@@ -78,6 +78,9 @@ properties:
required:
- interconnects
- interconnect-names
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SC7280 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SC7280 SoC PCIe root complex controller is based on the Synopsys
@@ -76,6 +76,11 @@ properties:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SC8180x PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SC8180x SoC PCIe root complex controller is based on the Synopsys


@@ -8,7 +8,7 @@ title: Qualcomm SC8280XP PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SC8280XP SoC PCIe root complex controller is based on the Synopsys
@@ -61,6 +61,9 @@ properties:
required:
- interconnects
- interconnect-names
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SM8150 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SM8150 SoC PCIe root complex controller is based on the Synopsys
@@ -74,6 +74,11 @@ properties:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SM8250 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SM8250 SoC PCIe root complex controller is based on the Synopsys
@@ -83,6 +83,11 @@ properties:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SM8350 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SM8350 SoC PCIe root complex controller is based on the Synopsys
@@ -73,6 +73,11 @@ properties:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SM8450 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SM8450 SoC PCIe root complex controller is based on the Synopsys
@@ -77,6 +77,11 @@ properties:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm SM8550 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SM8550 SoC (and compatible) PCIe root complex controller is based on
@@ -20,6 +20,7 @@ properties:
- const: qcom,pcie-sm8550
- items:
- enum:
- qcom,kaanapali-pcie
- qcom,sar2130p-pcie
- qcom,pcie-sm8650
- qcom,pcie-sm8750
@@ -83,6 +84,11 @@ properties:
- const: pci # PCIe core reset
- const: link_down # PCIe link down reset
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm X1E80100 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm X1E80100 SoC (and compatible) PCIe root complex controller is based on
@@ -73,6 +73,11 @@ properties:
- const: pci # PCIe core reset
- const: link_down # PCIe link down reset
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#


@@ -8,7 +8,7 @@ title: Qualcomm PCI express root complex
maintainers:
- Bjorn Andersson <bjorn.andersson@linaro.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description: |
Qualcomm PCIe root complex controller is based on the Synopsys DesignWare


@@ -0,0 +1,249 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/renesas,r9a08g045-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Renesas RZ/G3S PCIe host controller
maintainers:
- Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
description:
The Renesas RZ/G3S PCIe host controller complies with the PCIe Base
Specification 4.0 and supports link speeds up to 5 GT/s (Gen2).
properties:
compatible:
const: renesas,r9a08g045-pcie # RZ/G3S
reg:
maxItems: 1
interrupts:
items:
- description: System error interrupt
- description: System error on correctable error interrupt
- description: System error on non-fatal error interrupt
- description: System error on fatal error interrupt
- description: AXI error interrupt
- description: INTA interrupt
- description: INTB interrupt
- description: INTC interrupt
- description: INTD interrupt
- description: MSI interrupt
- description: Link bandwidth interrupt
- description: PME interrupt
- description: DMA interrupt
- description: PCIe event interrupt
- description: Message interrupt
- description: All interrupts
interrupt-names:
items:
- description: serr
- description: serr_cor
- description: serr_nonfatal
- description: serr_fatal
- description: axi_err
- description: inta
- description: intb
- description: intc
- description: intd
- description: msi
- description: link_bandwidth
- description: pm_pme
- description: dma
- description: pcie_evt
- description: msg
- description: all
interrupt-controller: true
clocks:
items:
- description: System clock
- description: PM control clock
clock-names:
items:
- description: aclk
- description: pm
resets:
items:
- description: AXI2PCIe Bridge reset
- description: Data link layer/transaction layer reset
- description: Transaction layer (ACLK domain) reset
- description: Transaction layer (PCLK domain) reset
- description: Physical layer reset
- description: Configuration register reset
- description: Configuration register load reset
reset-names:
items:
- description: aresetn
- description: rst_b
- description: rst_gp_b
- description: rst_ps_b
- description: rst_rsm_b
- description: rst_cfg_b
- description: rst_load_b
power-domains:
maxItems: 1
dma-ranges:
description:
A single range for the inbound memory region.
maxItems: 1
renesas,sysc:
description: |
System controller registers that control and monitor various PCIe
functions.
Control:
- transition to L1 state
- receiver termination settings
- RST_RSM_B signal
Monitor:
- clkl1pm clock request state
- power off information in L2 state
- errors (fatal, non-fatal, correctable)
$ref: /schemas/types.yaml#/definitions/phandle
patternProperties:
"^pcie@0,0$":
type: object
allOf:
- $ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
reg:
maxItems: 1
vendor-id:
const: 0x1912
device-id:
const: 0x0033
clocks:
items:
- description: Reference clock
clock-names:
items:
- const: ref
required:
- device_type
- vendor-id
- device-id
- clocks
- clock-names
unevaluatedProperties: false
required:
- compatible
- reg
- clocks
- clock-names
- resets
- reset-names
- interrupts
- interrupt-names
- interrupt-map
- interrupt-map-mask
- interrupt-controller
- power-domains
- "#address-cells"
- "#size-cells"
- "#interrupt-cells"
- renesas,sysc
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/r9a08g045-cpg.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
pcie@11e40000 {
compatible = "renesas,r9a08g045-pcie";
reg = <0 0x11e40000 0 0x10000>;
ranges = <0x02000000 0 0x30000000 0 0x30000000 0 0x08000000>;
/* Map all possible DRAM ranges (4 GB). */
dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 1 0x00000000>;
bus-range = <0x0 0xff>;
interrupts = <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "serr", "serr_cor", "serr_nonfatal",
"serr_fatal", "axi_err", "inta",
"intb", "intc", "intd", "msi",
"link_bandwidth", "pm_pme", "dma",
"pcie_evt", "msg", "all";
#interrupt-cells = <1>;
interrupt-controller;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie 0 0 0 0>, /* INTA */
<0 0 0 2 &pcie 0 0 0 1>, /* INTB */
<0 0 0 3 &pcie 0 0 0 2>, /* INTC */
<0 0 0 4 &pcie 0 0 0 3>; /* INTD */
clocks = <&cpg CPG_MOD R9A08G045_PCI_ACLK>,
<&cpg CPG_MOD R9A08G045_PCI_CLKL1PM>;
clock-names = "aclk", "pm";
resets = <&cpg R9A08G045_PCI_ARESETN>,
<&cpg R9A08G045_PCI_RST_B>,
<&cpg R9A08G045_PCI_RST_GP_B>,
<&cpg R9A08G045_PCI_RST_PS_B>,
<&cpg R9A08G045_PCI_RST_RSM_B>,
<&cpg R9A08G045_PCI_RST_CFG_B>,
<&cpg R9A08G045_PCI_RST_LOAD_B>;
reset-names = "aresetn", "rst_b", "rst_gp_b", "rst_ps_b",
"rst_rsm_b", "rst_cfg_b", "rst_load_b";
power-domains = <&cpg>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
renesas,sysc = <&sysc>;
pcie@0,0 {
reg = <0x0 0x0 0x0 0x0 0x0>;
ranges;
clocks = <&versa3 5>;
clock-names = "ref";
device_type = "pci";
vendor-id = <0x1912>;
device-id = <0x0033>;
#address-cells = <3>;
#size-cells = <2>;
};
};
};
...


@@ -22,6 +22,7 @@ properties:
- const: rockchip,rk3568-pcie
- items:
- enum:
- rockchip,rk3528-pcie
- rockchip,rk3562-pcie
- rockchip,rk3576-pcie
- rockchip,rk3588-pcie
@@ -78,6 +79,7 @@ allOf:
compatible:
contains:
enum:
- rockchip,rk3528-pcie
- rockchip,rk3562-pcie
- rockchip,rk3576-pcie
then:
@@ -89,6 +91,7 @@ allOf:
compatible:
contains:
enum:
- rockchip,rk3528-pcie
- rockchip,rk3562-pcie
- rockchip,rk3576-pcie
then:


@@ -115,11 +115,11 @@ properties:
above for new bindings.
oneOf:
- description: See native 'dbi' clock for details
enum: [ pcie, pcie_apb_sys, aclk_dbi, reg ]
enum: [ pcie, pcie_apb_sys, aclk_dbi, reg, port ]
- description: See native 'mstr/slv' clock for details
enum: [ pcie_bus, pcie_inbound_axi, pcie_aclk, aclk_mst, aclk_slv ]
- description: See native 'pipe' clock for details
enum: [ pcie_phy, pcie_phy_ref, link ]
enum: [ pcie_phy, pcie_phy_ref, link, general ]
- description: See native 'aux' clock for details
enum: [ pcie_aux ]
- description: See native 'ref' clock for details.
@@ -176,7 +176,7 @@ properties:
- description: See native 'phy' reset for details
enum: [ pciephy, link ]
- description: See native 'pwr' reset for details
enum: [ turnoff ]
enum: [ turnoff, port ]
phys:
description:


@@ -0,0 +1,157 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/spacemit,k1-pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SpacemiT K1 PCI Express Host Controller
maintainers:
- Alex Elder <elder@riscstar.com>
description: >
The SpacemiT K1 SoC PCIe host controller is based on the Synopsys DesignWare
PCIe IP. The controller uses the DesignWare built-in MSI interrupt
controller, and supports 256 MSIs.
allOf:
- $ref: /schemas/pci/snps,dw-pcie.yaml#
properties:
compatible:
const: spacemit,k1-pcie
reg:
items:
- description: DesignWare PCIe registers
- description: ATU address space
- description: PCIe configuration space
- description: Link control registers
reg-names:
items:
- const: dbi
- const: atu
- const: config
- const: link
clocks:
items:
- description: DWC PCIe Data Bus Interface (DBI) clock
- description: DWC PCIe application AXI-bus master interface clock
- description: DWC PCIe application AXI-bus slave interface clock
clock-names:
items:
- const: dbi
- const: mstr
- const: slv
resets:
items:
- description: DWC PCIe Data Bus Interface (DBI) reset
- description: DWC PCIe application AXI-bus master interface reset
- description: DWC PCIe application AXI-bus slave interface reset
reset-names:
items:
- const: dbi
- const: mstr
- const: slv
interrupts:
items:
- description: Interrupt used for MSIs
interrupt-names:
const: msi
spacemit,apmu:
$ref: /schemas/types.yaml#/definitions/phandle-array
description:
A phandle to the APMU system controller, whose regmap is used to
manage resets and link state, along with the offset of its reset
control register.
items:
- items:
- description: phandle to APMU system controller
- description: register offset
patternProperties:
'^pcie@':
type: object
$ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
phys:
maxItems: 1
vpcie3v3-supply:
description:
A phandle to the 3.3 V regulator used for PCIe.
required:
- phys
- vpcie3v3-supply
unevaluatedProperties: false
required:
- clocks
- clock-names
- resets
- reset-names
- interrupts
- interrupt-names
- spacemit,apmu
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/spacemit,k1-syscon.h>
pcie@ca400000 {
device_type = "pci";
compatible = "spacemit,k1-pcie";
reg = <0xca400000 0x00001000>,
<0xca700000 0x0001ff24>,
<0x9f000000 0x00002000>,
<0xc0c20000 0x00001000>;
reg-names = "dbi",
"atu",
"config",
"link";
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x01000000 0x0 0x00000000 0x9f002000 0x0 0x00100000>,
<0x02000000 0x0 0x90000000 0x90000000 0x0 0x0f000000>;
interrupts = <142>;
interrupt-names = "msi";
clocks = <&syscon_apmu CLK_PCIE1_DBI>,
<&syscon_apmu CLK_PCIE1_MASTER>,
<&syscon_apmu CLK_PCIE1_SLAVE>;
clock-names = "dbi",
"mstr",
"slv";
resets = <&syscon_apmu RESET_PCIE1_DBI>,
<&syscon_apmu RESET_PCIE1_MASTER>,
<&syscon_apmu RESET_PCIE1_SLAVE>;
reset-names = "dbi",
"mstr",
"slv";
pinctrl-names = "default";
pinctrl-0 = <&pcie1_3_cfg>;
spacemit,apmu = <&syscon_apmu 0x3d4>;
pcie@0 {
device_type = "pci";
compatible = "pciclass,0604";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
phys = <&pcie1_phy>;
vpcie3v3-supply = <&pcie_vcc_3v3>;
};
};


@@ -0,0 +1,179 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/toshiba,tc9563.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Toshiba TC9563 PCIe switch
maintainers:
- Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
description: |
The Toshiba TC9563 PCIe switch has one upstream and three downstream
ports. The third downstream port has an integrated Ethernet MAC
endpoint device; the other two downstream ports connect to external
devices. The switch can be configured through its I2C interface,
before the PCIe link is established, to adjust FTS, ASPM-related
entry delays, tx amplitude, etc. for better power efficiency and
functionality.
properties:
compatible:
enum:
- pci1179,0623
reg:
maxItems: 1
resx-gpios:
maxItems: 1
description:
GPIO controlling the RESX# pin.
vdd18-supply: true
vdd09-supply: true
vddc-supply: true
vddio1-supply: true
vddio2-supply: true
vddio18-supply: true
i2c-parent:
$ref: /schemas/types.yaml#/definitions/phandle-array
description:
A phandle to the parent I2C node and the slave address of the device
used to configure tc9563 to change FTS, tx amplitude etc.
items:
- description: Phandle to the I2C controller node
- description: I2C slave address
patternProperties:
"^pcie@[1-3],0$":
description:
Child nodes describing the internal downstream ports of the TC9563
switch.
type: object
allOf:
- $ref: "#/$defs/tc9563-node"
- $ref: /schemas/pci/pci-pci-bridge.yaml#
unevaluatedProperties: false
$defs:
tc9563-node:
type: object
properties:
toshiba,tx-amplitude-microvolt:
description:
Change Tx Margin setting for low power consumption.
toshiba,no-dfe-support:
type: boolean
description:
Disable DFE (Decision Feedback Equalizer), which mitigates
intersymbol interference and some reflections caused by
impedance mismatches.
required:
- resx-gpios
- vdd18-supply
- vdd09-supply
- vddc-supply
- vddio1-supply
- vddio2-supply
- vddio18-supply
- i2c-parent
allOf:
- $ref: "#/$defs/tc9563-node"
- $ref: /schemas/pci/pci-bus-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
pcie {
#address-cells = <3>;
#size-cells = <2>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
bus-range = <0x01 0xff>;
pcie@0,0 {
compatible = "pci1179,0623";
reg = <0x10000 0x0 0x0 0x0 0x0>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
ranges;
bus-range = <0x02 0xff>;
i2c-parent = <&qup_i2c 0x77>;
vdd18-supply = <&vdd>;
vdd09-supply = <&vdd>;
vddc-supply = <&vdd>;
vddio1-supply = <&vdd>;
vddio2-supply = <&vdd>;
vddio18-supply = <&vdd>;
resx-gpios = <&gpio 1 GPIO_ACTIVE_LOW>;
pcie@1,0 {
compatible = "pciclass,0604";
reg = <0x20800 0x0 0x0 0x0 0x0>;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
ranges;
bus-range = <0x03 0xff>;
toshiba,no-dfe-support;
};
pcie@2,0 {
compatible = "pciclass,0604";
reg = <0x21000 0x0 0x0 0x0 0x0>;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
ranges;
bus-range = <0x04 0xff>;
};
pcie@3,0 {
compatible = "pciclass,0604";
reg = <0x21800 0x0 0x0 0x0 0x0>;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
ranges;
bus-range = <0x05 0xff>;
toshiba,tx-amplitude-microvolt = <10>;
ethernet@0,0 {
reg = <0x50000 0x0 0x0 0x0 0x0>;
};
ethernet@0,1 {
reg = <0x50100 0x0 0x0 0x0 0x0>;
};
};
};
};
};


@@ -37,6 +37,9 @@ PCI Support Library
.. kernel-doc:: drivers/pci/slot.c
:export:
.. kernel-doc:: drivers/pci/rebar.c
:export:
.. kernel-doc:: drivers/pci/rom.c
:export:


@@ -3164,6 +3164,15 @@ F: arch/arm64/boot/dts/freescale/s32g*.dts*
F: drivers/pinctrl/nxp/
F: drivers/rtc/rtc-s32g.c
ARM/NXP S32G PCIE CONTROLLER DRIVER
M: Ciprian Marian Costea <ciprianmarian.costea@oss.nxp.com>
R: NXP S32 Linux Team <s32@nxp.com>
L: imx@lists.linux.dev
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/pci/nxp,s32g-pcie.yaml
F: drivers/pci/controller/dwc/pcie-nxp-s32g*
ARM/NXP S32G/S32R DWMAC ETHERNET DRIVER
M: Jan Petrous <jan.petrous@oss.nxp.com>
R: s32@nxp.com
@@ -19775,6 +19784,13 @@ S: Orphan
F: Documentation/devicetree/bindings/pci/cdns,*
F: drivers/pci/controller/cadence/*cadence*
PCI DRIVER FOR CIX Sky1
M: Hans Zhang <hans.zhang@cixtech.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/cix,sky1-pcie-*.yaml
F: drivers/pci/controller/cadence/*sky1*
PCI DRIVER FOR FREESCALE LAYERSCAPE
M: Minghuan Lian <minghuan.Lian@nxp.com>
M: Mingkai Hu <mingkai.hu@nxp.com>
@@ -20025,6 +20041,7 @@ F: include/linux/pci-p2pdma.h
PCI POWER CONTROL
M: Bartosz Golaszewski <brgl@kernel.org>
M: Manivannan Sadhasivam <mani@kernel.org>
L: linux-pci@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
@@ -20161,6 +20178,14 @@ S: Maintained
F: drivers/pci/controller/dwc/pcie-qcom-common.c
F: drivers/pci/controller/dwc/pcie-qcom.c
PCIE DRIVER FOR RENESAS RZ/G3S SERIES
M: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
L: linux-pci@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/renesas,r9a08g045-pcie.yaml
F: drivers/pci/controller/pcie-rzg3s-host.c
PCIE DRIVER FOR ROCKCHIP
M: Shawn Lin <shawn.lin@rock-chips.com>
L: linux-pci@vger.kernel.org


@@ -105,7 +105,6 @@ void adf_dev_restore(struct adf_accel_dev *accel_dev)
accel_dev->accel_id);
hw_device->reset_device(accel_dev);
pci_restore_state(pdev);
pci_save_state(pdev);
}
}
@@ -204,7 +203,6 @@ static pci_ers_result_t adf_slot_reset(struct pci_dev *pdev)
if (!pdev->is_busmaster)
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
res = adf_dev_up(accel_dev, false);
if (res && res != -EALREADY)
return PCI_ERS_RESULT_DISCONNECT;


@@ -1286,7 +1286,6 @@ static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
pci_wake_from_d3(pdev, false);
}


@@ -1678,9 +1678,9 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
int rbar_size = pci_rebar_bytes_to_size(adev->gmc.real_vram_size);
struct pci_bus *root;
struct resource *res;
int max_size, r;
unsigned int i;
u16 cmd;
int r;
if (!IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT))
return 0;
@@ -1726,30 +1726,28 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
return 0;
/* Limit the BAR size to what is available */
rbar_size = min(fls(pci_rebar_get_possible_sizes(adev->pdev, 0)) - 1,
rbar_size);
max_size = pci_rebar_get_max_size(adev->pdev, 0);
if (max_size < 0)
return 0;
rbar_size = min(max_size, rbar_size);
/* Disable memory decoding while we change the BAR addresses and size */
pci_read_config_word(adev->pdev, PCI_COMMAND, &cmd);
pci_write_config_word(adev->pdev, PCI_COMMAND,
cmd & ~PCI_COMMAND_MEMORY);
/* Free the VRAM and doorbell BAR, we most likely need to move both. */
/* Tear down doorbell as resizing will release BARs */
amdgpu_doorbell_fini(adev);
if (adev->asic_type >= CHIP_BONAIRE)
pci_release_resource(adev->pdev, 2);
pci_release_resource(adev->pdev, 0);
r = pci_resize_resource(adev->pdev, 0, rbar_size);
r = pci_resize_resource(adev->pdev, 0, rbar_size,
(adev->asic_type >= CHIP_BONAIRE) ? 1 << 5
: 1 << 2);
if (r == -ENOSPC)
dev_info(adev->dev,
"Not enough PCI address space for a large BAR.");
else if (r && r != -ENOTSUPP)
dev_err(adev->dev, "Problem resizing BAR0 (%d).", r);
pci_assign_unassigned_bus_resources(adev->pdev->bus);
/* When the doorbell or fb BAR isn't available we have no chance of
* using the device.
*/


@@ -20,16 +20,6 @@
#include "gt/intel_gt_regs.h"
#ifdef CONFIG_64BIT
static void _release_bars(struct pci_dev *pdev)
{
int resno;
for (resno = PCI_STD_RESOURCES; resno < PCI_STD_RESOURCE_END; resno++) {
if (pci_resource_len(pdev, resno))
pci_release_resource(pdev, resno);
}
}
static void
_resize_bar(struct drm_i915_private *i915, int resno, resource_size_t size)
{
@@ -37,9 +27,7 @@ _resize_bar(struct drm_i915_private *i915, int resno, resource_size_t size)
int bar_size = pci_rebar_bytes_to_size(size);
int ret;
_release_bars(pdev);
ret = pci_resize_resource(pdev, resno, bar_size);
ret = pci_resize_resource(pdev, resno, bar_size, 0);
if (ret) {
drm_info(&i915->drm, "Failed to resize BAR%d to %dM (%pe)\n",
resno, 1 << bar_size, ERR_PTR(ret));
@@ -63,16 +51,12 @@ static void i915_resize_lmem_bar(struct drm_i915_private *i915, resource_size_t
current_size = roundup_pow_of_two(pci_resource_len(pdev, GEN12_LMEM_BAR));
if (i915->params.lmem_bar_size) {
u32 bar_sizes;
rebar_size = i915->params.lmem_bar_size *
(resource_size_t)SZ_1M;
bar_sizes = pci_rebar_get_possible_sizes(pdev, GEN12_LMEM_BAR);
rebar_size = i915->params.lmem_bar_size * (resource_size_t)SZ_1M;
if (rebar_size == current_size)
return;
if (!(bar_sizes & BIT(pci_rebar_bytes_to_size(rebar_size))) ||
if (!pci_rebar_size_supported(pdev, GEN12_LMEM_BAR,
pci_rebar_bytes_to_size(rebar_size)) ||
rebar_size >= roundup_pow_of_two(lmem_size)) {
rebar_size = lmem_size;


@@ -25,39 +25,13 @@
#include "xe_vram.h"
#include "xe_vram_types.h"
#define BAR_SIZE_SHIFT 20
/*
* Release all the BARs that could influence/block LMEMBAR resizing, i.e.
* assigned IORESOURCE_MEM_64 BARs
*/
static void release_bars(struct pci_dev *pdev)
{
struct resource *res;
int i;
pci_dev_for_each_resource(pdev, res, i) {
/* Resource already un-assigned, do not reset it */
if (!res->parent)
continue;
/* No need to release unrelated BARs */
if (!(res->flags & IORESOURCE_MEM_64))
continue;
pci_release_resource(pdev, i);
}
}
static void resize_bar(struct xe_device *xe, int resno, resource_size_t size)
{
struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
int bar_size = pci_rebar_bytes_to_size(size);
int ret;
release_bars(pdev);
ret = pci_resize_resource(pdev, resno, bar_size);
ret = pci_resize_resource(pdev, resno, bar_size, 0);
if (ret) {
drm_info(&xe->drm, "Failed to resize BAR%d to %dM (%pe). Consider enabling 'Resizable BAR' support in your BIOS\n",
resno, 1 << bar_size, ERR_PTR(ret));
@@ -79,41 +53,37 @@ void xe_vram_resize_bar(struct xe_device *xe)
resource_size_t current_size;
resource_size_t rebar_size;
struct resource *root_res;
u32 bar_size_mask;
int max_size, i;
u32 pci_cmd;
int i;
/* gather some relevant info */
current_size = pci_resource_len(pdev, LMEM_BAR);
bar_size_mask = pci_rebar_get_possible_sizes(pdev, LMEM_BAR);
if (!bar_size_mask)
return;
if (force_vram_bar_size < 0)
return;
/* set to a specific size? */
if (force_vram_bar_size) {
u32 bar_size_bit;
rebar_size = pci_rebar_bytes_to_size(force_vram_bar_size *
(resource_size_t)SZ_1M);
rebar_size = force_vram_bar_size * (resource_size_t)SZ_1M;
bar_size_bit = bar_size_mask & BIT(pci_rebar_bytes_to_size(rebar_size));
if (!bar_size_bit) {
if (!pci_rebar_size_supported(pdev, LMEM_BAR, rebar_size)) {
drm_info(&xe->drm,
"Requested size: %lluMiB is not supported by rebar sizes: 0x%x. Leaving default: %lluMiB\n",
(u64)rebar_size >> 20, bar_size_mask, (u64)current_size >> 20);
"Requested size: %lluMiB is not supported by rebar sizes: 0x%llx. Leaving default: %lluMiB\n",
(u64)pci_rebar_size_to_bytes(rebar_size) >> 20,
pci_rebar_get_possible_sizes(pdev, LMEM_BAR),
(u64)current_size >> 20);
return;
}
rebar_size = 1ULL << (__fls(bar_size_bit) + BAR_SIZE_SHIFT);
rebar_size = pci_rebar_size_to_bytes(rebar_size);
if (rebar_size == current_size)
return;
} else {
rebar_size = 1ULL << (__fls(bar_size_mask) + BAR_SIZE_SHIFT);
max_size = pci_rebar_get_max_size(pdev, LMEM_BAR);
if (max_size < 0)
return;
rebar_size = pci_rebar_size_to_bytes(max_size);
/* only resize if larger than current */
if (rebar_size <= current_size)


@@ -6444,7 +6444,6 @@ bnx2_reset_task(struct work_struct *work)
if (!(pcicmd & PCI_COMMAND_MEMORY)) {
/* in case PCI block has reset */
pci_restore_state(bp->pdev);
pci_save_state(bp->pdev);
}
rc = bnx2_init_nic(bp, 1);
if (rc) {
@@ -8718,7 +8717,6 @@ static pci_ers_result_t bnx2_io_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
if (netif_running(dev))
err = bnx2_init_nic(bp, 1);


@@ -14216,7 +14216,6 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
if (netif_running(dev))
bnx2x_set_power_state(bp, PCI_D0);


@@ -18337,7 +18337,6 @@ static pci_ers_result_t tg3_io_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
if (!netdev || !netif_running(netdev)) {
rc = PCI_ERS_RESULT_RECOVERED;


@@ -2933,7 +2933,6 @@ static int t3_reenable_adapter(struct adapter *adapter)
}
pci_set_master(adapter->pdev);
pci_restore_state(adapter->pdev);
-pci_save_state(adapter->pdev);
/* Free sge resources */
t3_free_sge_resources(adapter);


@@ -5458,7 +5458,6 @@ static pci_ers_result_t eeh_slot_reset(struct pci_dev *pdev)
if (!adap) {
pci_restore_state(pdev);
-pci_save_state(pdev);
return PCI_ERS_RESULT_RECOVERED;
}
@@ -5473,7 +5472,6 @@ static pci_ers_result_t eeh_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
if (t4_wait_dev_ready(adap->regs) < 0)
return PCI_ERS_RESULT_DISCONNECT;


@@ -160,7 +160,6 @@ static pci_ers_result_t hbg_pci_err_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
hbg_err_reset(priv);
return PCI_ERS_RESULT_RECOVERED;


@@ -7195,7 +7195,6 @@ static pci_ers_result_t e1000_io_slot_reset(struct pci_dev *pdev)
"Cannot re-enable PCI device after reset.\n");
result = PCI_ERS_RESULT_DISCONNECT;
} else {
-pdev->state_saved = true;
pci_restore_state(pdev);
pci_set_master(pdev);


@@ -2423,12 +2423,6 @@ static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
-/* After second error pci->state_saved is false, this
-* resets it so EEH doesn't break.
-*/
-pci_save_state(pdev);
pci_wake_from_d3(pdev, false);
result = PCI_ERS_RESULT_RECOVERED;


@@ -16455,7 +16455,6 @@ static pci_ers_result_t i40e_pci_error_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
pci_wake_from_d3(pdev, false);
reg = rd32(&pf->hw, I40E_GLGEN_RTRIG);


@@ -5653,7 +5653,6 @@ static int ice_resume(struct device *dev)
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
-pci_save_state(pdev);
if (!pci_device_is_present(pdev))
return -ENODEV;
@@ -5753,7 +5752,6 @@ static pci_ers_result_t ice_pci_err_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
pci_wake_from_d3(pdev, false);
/* Check for life */


@@ -9599,7 +9599,6 @@ static int __igb_resume(struct device *dev, bool rpm)
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
-pci_save_state(pdev);
if (!pci_device_is_present(pdev))
return -ENODEV;
@@ -9754,7 +9753,6 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
pci_enable_wake(pdev, PCI_D3hot, 0);
pci_enable_wake(pdev, PCI_D3cold, 0);


@@ -7530,7 +7530,6 @@ static int __igc_resume(struct device *dev, bool rpm)
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
-pci_save_state(pdev);
if (!pci_device_is_present(pdev))
return -ENODEV;
@@ -7667,7 +7666,6 @@ static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev)
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
pci_enable_wake(pdev, PCI_D3hot, 0);
pci_enable_wake(pdev, PCI_D3cold, 0);


@@ -12298,7 +12298,6 @@ static pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev)
adapter->hw.hw_addr = adapter->io_addr;
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
pci_wake_from_d3(pdev, false);


@@ -4368,7 +4368,6 @@ static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
return PCI_ERS_RESULT_RECOVERED;
}


@@ -2137,7 +2137,6 @@ static pci_ers_result_t mlx5_pci_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
-pci_save_state(pdev);
err = wait_vital(pdev);
if (err) {


@@ -581,7 +581,6 @@ static pci_ers_result_t fbnic_err_slot_reset(struct pci_dev *pdev)
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
-pci_save_state(pdev);
if (pci_enable_device_mem(pdev)) {
dev_err(&pdev->dev,


@@ -3915,7 +3915,6 @@ static int lan743x_pm_resume(struct device *dev)
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
-pci_save_state(pdev);
/* Restore HW_CFG that was saved during pm suspend */
if (adapter->is_pci11x1x)


@@ -3416,10 +3416,6 @@ static void myri10ge_watchdog(struct work_struct *work)
* nic was resumed from power saving mode.
*/
pci_restore_state(mgp->pdev);
-/* save state again for accounting reasons */
-pci_save_state(mgp->pdev);
} else {
/* if we get back -1's from our slot, perhaps somebody
* powered off our card. Don't try to reset it in


@@ -3425,7 +3425,6 @@ static void s2io_reset(struct s2io_nic *sp)
/* Restore the PCI state saved during initialization. */
pci_restore_state(sp->pdev);
-pci_save_state(sp->pdev);
pci_read_config_word(sp->pdev, 0x2, &val16);
if (check_pci_device_id(val16) != (u16)PCI_ANY_ID)
break;


@@ -4,7 +4,7 @@
obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o \
remove.o pci.o pci-driver.o search.o \
-rom.o setup-res.o irq.o vpd.o \
+rebar.o rom.o setup-res.o irq.o vpd.o \
setup-bus.o vc.o mmap.o devres.o
obj-$(CONFIG_PCI) += msi/


@@ -357,6 +357,9 @@ void pci_bus_add_device(struct pci_dev *dev)
pci_proc_attach_device(dev);
pci_bridge_d3_update(dev);
+/* Save config space for error recoverability */
+pci_save_state(dev);
/*
* If the PCI device is associated with a pwrctrl device with a
* power supply, create a device link between the PCI device and


@@ -146,7 +146,7 @@ config PCIE_HISI_ERR
config PCI_IXP4XX
bool "Intel IXP4xx PCI controller"
-depends on ARM && OF
+depends on OF
depends on ARCH_IXP4XX || COMPILE_TEST
default ARCH_IXP4XX
help
@@ -259,12 +259,20 @@ config PCIE_RCAR_EP
config PCI_RCAR_GEN2
bool "Renesas R-Car Gen2 Internal PCI controller"
-depends on ARCH_RENESAS || COMPILE_TEST
-depends on ARM
+depends on (ARCH_RENESAS && ARM) || COMPILE_TEST
help
Say Y here if you want internal PCI support on R-Car Gen2 SoC.
-There are 3 internal PCI controllers available with a single
-built-in EHCI/OHCI host controller present on each one.
+Each internal PCI controller contains a single built-in EHCI/OHCI
+host controller.
+config PCIE_RENESAS_RZG3S_HOST
+bool "Renesas RZ/G3S PCIe host controller"
+depends on ARCH_RENESAS || COMPILE_TEST
+select MFD_SYSCON
+select IRQ_MSI_LIB
+help
+Say Y here if you want PCIe host controller support on Renesas RZ/G3S
+SoC.
config PCIE_ROCKCHIP
bool


@@ -10,6 +10,7 @@ obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o
obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o
obj-$(CONFIG_PCIE_RCAR_HOST) += pcie-rcar.o pcie-rcar-host.o
obj-$(CONFIG_PCIE_RCAR_EP) += pcie-rcar.o pcie-rcar-ep.o
+obj-$(CONFIG_PCIE_RENESAS_RZG3S_HOST) += pcie-rzg3s-host.o
obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o
obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o
obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o


@@ -19,10 +19,10 @@ config PCIE_CADENCE_EP
select PCIE_CADENCE
config PCIE_CADENCE_PLAT
-bool
+tristate
config PCIE_CADENCE_PLAT_HOST
-bool "Cadence platform PCIe controller (host mode)"
+tristate "Cadence platform PCIe controller (host mode)"
depends on OF
select PCIE_CADENCE_HOST
select PCIE_CADENCE_PLAT
@@ -32,7 +32,7 @@ config PCIE_CADENCE_PLAT_HOST
vendors SoCs.
config PCIE_CADENCE_PLAT_EP
-bool "Cadence platform PCIe controller (endpoint mode)"
+tristate "Cadence platform PCIe controller (endpoint mode)"
depends on OF
depends on PCI_ENDPOINT
select PCIE_CADENCE_EP
@@ -42,6 +42,21 @@ config PCIE_CADENCE_PLAT_EP
endpoint mode. This PCIe controller may be embedded into many
different vendors SoCs.
+config PCI_SKY1_HOST
+tristate "CIX SKY1 PCIe controller (host mode)"
+depends on OF && (ARCH_CIX || COMPILE_TEST)
+select PCIE_CADENCE_HOST
+select PCI_ECAM
+help
+Say Y here if you want to support the CIX SKY1 PCIe platform
+controller in host mode. CIX SKY1 PCIe controller uses Cadence
+HPA (High Performance Architecture IP [Second generation of
+Cadence PCIe IP])
+This driver requires Cadence PCIe core infrastructure
+(PCIE_CADENCE_HOST) and hardware platform adaptation layer
+to function.
config PCIE_SG2042_HOST
tristate "Sophgo SG2042 PCIe controller (host mode)"
depends on OF && (ARCH_SOPHGO || COMPILE_TEST)


@@ -1,7 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
-obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
-obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
+pcie-cadence-mod-y := pcie-cadence-hpa.o pcie-cadence.o
+pcie-cadence-host-mod-y := pcie-cadence-host-common.o pcie-cadence-host.o pcie-cadence-host-hpa.o
+pcie-cadence-ep-mod-y := pcie-cadence-ep.o
+obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence-mod.o
+obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host-mod.o
+obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep-mod.o
obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
obj-$(CONFIG_PCI_J721E) += pci-j721e.o
obj-$(CONFIG_PCIE_SG2042_HOST) += pcie-sg2042.o
obj-$(CONFIG_PCI_SKY1_HOST) += pci-sky1.o


@@ -477,9 +477,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
struct j721e_pcie *pcie;
struct cdns_pcie_rc *rc = NULL;
struct cdns_pcie_ep *ep = NULL;
-struct gpio_desc *gpiod;
-void __iomem *base;
-struct clk *clk;
u32 num_lanes;
u32 mode;
int ret;
@@ -590,12 +588,12 @@ static int j721e_pcie_probe(struct platform_device *pdev)
switch (mode) {
case PCI_MODE_RC:
-gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
-if (IS_ERR(gpiod)) {
-ret = dev_err_probe(dev, PTR_ERR(gpiod), "Failed to get reset GPIO\n");
+pcie->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+if (IS_ERR(pcie->reset_gpio)) {
+ret = dev_err_probe(dev, PTR_ERR(pcie->reset_gpio),
+"Failed to get reset GPIO\n");
goto err_get_sync;
}
-pcie->reset_gpio = gpiod;
ret = cdns_pcie_init_phy(dev, cdns_pcie);
if (ret) {
@@ -603,19 +601,13 @@ static int j721e_pcie_probe(struct platform_device *pdev)
goto err_get_sync;
}
-clk = devm_clk_get_optional(dev, "pcie_refclk");
-if (IS_ERR(clk)) {
-ret = dev_err_probe(dev, PTR_ERR(clk), "failed to get pcie_refclk\n");
+pcie->refclk = devm_clk_get_optional_enabled(dev, "pcie_refclk");
+if (IS_ERR(pcie->refclk)) {
+ret = dev_err_probe(dev, PTR_ERR(pcie->refclk),
+"failed to enable pcie_refclk\n");
goto err_pcie_setup;
}
-ret = clk_prepare_enable(clk);
-if (ret) {
-dev_err_probe(dev, ret, "failed to enable pcie_refclk\n");
-goto err_pcie_setup;
-}
-pcie->refclk = clk;
/*
* Section 2.2 of the PCI Express Card Electromechanical
* Specification (Revision 5.1) mandates that the deassertion
@@ -623,16 +615,14 @@ static int j721e_pcie_probe(struct platform_device *pdev)
* This shall ensure that the power and the reference clock
* are stable.
*/
-if (gpiod) {
+if (pcie->reset_gpio) {
msleep(PCIE_T_PVPERL_MS);
-gpiod_set_value_cansleep(gpiod, 1);
+gpiod_set_value_cansleep(pcie->reset_gpio, 1);
}
ret = cdns_pcie_host_setup(rc);
-if (ret < 0) {
-clk_disable_unprepare(pcie->refclk);
+if (ret < 0)
goto err_pcie_setup;
-}
break;
case PCI_MODE_EP:
@@ -679,7 +669,6 @@ static void j721e_pcie_remove(struct platform_device *pdev)
gpiod_set_value_cansleep(pcie->reset_gpio, 0);
-clk_disable_unprepare(pcie->refclk);
cdns_pcie_disable_phy(cdns_pcie);
j721e_pcie_disable_link_irq(pcie);
pm_runtime_put(dev);


@@ -0,0 +1,238 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe controller driver for CIX's sky1 SoCs
*
* Copyright 2025 Cix Technology Group Co., Ltd.
* Author: Hans Zhang <hans.zhang@cixtech.com>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/pci_ids.h>
#include "pcie-cadence.h"
#include "pcie-cadence-host-common.h"
#define PCI_VENDOR_ID_CIX 0x1f6c
#define PCI_DEVICE_ID_CIX_SKY1 0x0001
#define STRAP_REG(n) ((n) * 0x04)
#define STATUS_REG(n) ((n) * 0x04)
#define LINK_TRAINING_ENABLE BIT(0)
#define LINK_COMPLETE BIT(0)
#define SKY1_IP_REG_BANK 0x1000
#define SKY1_IP_CFG_CTRL_REG_BANK 0x4c00
#define SKY1_IP_AXI_MASTER_COMMON 0xf000
#define SKY1_AXI_SLAVE 0x9000
#define SKY1_AXI_MASTER 0xb000
#define SKY1_AXI_HLS_REGISTERS 0xc000
#define SKY1_AXI_RAS_REGISTERS 0xe000
#define SKY1_DTI_REGISTERS 0xd000
#define IP_REG_I_DBG_STS_0 0x420
struct sky1_pcie {
struct cdns_pcie *cdns_pcie;
struct cdns_pcie_rc *cdns_pcie_rc;
struct resource *cfg_res;
struct resource *msg_res;
struct pci_config_window *cfg;
void __iomem *strap_base;
void __iomem *status_base;
void __iomem *reg_base;
void __iomem *cfg_base;
void __iomem *msg_base;
};
static int sky1_pcie_resource_get(struct platform_device *pdev,
struct sky1_pcie *pcie)
{
struct device *dev = &pdev->dev;
struct resource *res;
void __iomem *base;
base = devm_platform_ioremap_resource_byname(pdev, "reg");
if (IS_ERR(base))
return dev_err_probe(dev, PTR_ERR(base),
"unable to find \"reg\" registers\n");
pcie->reg_base = base;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
if (!res)
return dev_err_probe(dev, -ENODEV, "unable to get \"cfg\" resource\n");
pcie->cfg_res = res;
base = devm_platform_ioremap_resource_byname(pdev, "rcsu_strap");
if (IS_ERR(base))
return dev_err_probe(dev, PTR_ERR(base),
"unable to find \"rcsu_strap\" registers\n");
pcie->strap_base = base;
base = devm_platform_ioremap_resource_byname(pdev, "rcsu_status");
if (IS_ERR(base))
return dev_err_probe(dev, PTR_ERR(base),
"unable to find \"rcsu_status\" registers\n");
pcie->status_base = base;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "msg");
if (!res)
return dev_err_probe(dev, -ENODEV, "unable to get \"msg\" resource\n");
pcie->msg_res = res;
pcie->msg_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pcie->msg_base)) {
return dev_err_probe(dev, PTR_ERR(pcie->msg_base),
"unable to ioremap msg resource\n");
}
return 0;
}
static int sky1_pcie_start_link(struct cdns_pcie *cdns_pcie)
{
struct sky1_pcie *pcie = dev_get_drvdata(cdns_pcie->dev);
u32 val;
val = readl(pcie->strap_base + STRAP_REG(1));
val |= LINK_TRAINING_ENABLE;
writel(val, pcie->strap_base + STRAP_REG(1));
return 0;
}
static void sky1_pcie_stop_link(struct cdns_pcie *cdns_pcie)
{
struct sky1_pcie *pcie = dev_get_drvdata(cdns_pcie->dev);
u32 val;
val = readl(pcie->strap_base + STRAP_REG(1));
val &= ~LINK_TRAINING_ENABLE;
writel(val, pcie->strap_base + STRAP_REG(1));
}
static bool sky1_pcie_link_up(struct cdns_pcie *cdns_pcie)
{
u32 val;
val = cdns_pcie_hpa_readl(cdns_pcie, REG_BANK_IP_REG,
IP_REG_I_DBG_STS_0);
return val & LINK_COMPLETE;
}
static const struct cdns_pcie_ops sky1_pcie_ops = {
.start_link = sky1_pcie_start_link,
.stop_link = sky1_pcie_stop_link,
.link_up = sky1_pcie_link_up,
};
static int sky1_pcie_probe(struct platform_device *pdev)
{
struct cdns_plat_pcie_of_data *reg_off;
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct cdns_pcie *cdns_pcie;
struct resource_entry *bus;
struct cdns_pcie_rc *rc;
struct sky1_pcie *pcie;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
if (!bridge)
return -ENOMEM;
ret = sky1_pcie_resource_get(pdev, pcie);
if (ret < 0)
return ret;
bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
if (!bus)
return -ENODEV;
pcie->cfg = pci_ecam_create(dev, pcie->cfg_res, bus->res,
&pci_generic_ecam_ops);
if (IS_ERR(pcie->cfg))
return PTR_ERR(pcie->cfg);
bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops;
rc = pci_host_bridge_priv(bridge);
rc->ecam_supported = 1;
rc->cfg_base = pcie->cfg->win;
rc->cfg_res = &pcie->cfg->res;
cdns_pcie = &rc->pcie;
cdns_pcie->dev = dev;
cdns_pcie->ops = &sky1_pcie_ops;
cdns_pcie->reg_base = pcie->reg_base;
cdns_pcie->msg_res = pcie->msg_res;
cdns_pcie->is_rc = 1;
reg_off = devm_kzalloc(dev, sizeof(*reg_off), GFP_KERNEL);
if (!reg_off)
return -ENOMEM;
reg_off->ip_reg_bank_offset = SKY1_IP_REG_BANK;
reg_off->ip_cfg_ctrl_reg_offset = SKY1_IP_CFG_CTRL_REG_BANK;
reg_off->axi_mstr_common_offset = SKY1_IP_AXI_MASTER_COMMON;
reg_off->axi_slave_offset = SKY1_AXI_SLAVE;
reg_off->axi_master_offset = SKY1_AXI_MASTER;
reg_off->axi_hls_offset = SKY1_AXI_HLS_REGISTERS;
reg_off->axi_ras_offset = SKY1_AXI_RAS_REGISTERS;
reg_off->axi_dti_offset = SKY1_DTI_REGISTERS;
cdns_pcie->cdns_pcie_reg_offsets = reg_off;
pcie->cdns_pcie = cdns_pcie;
pcie->cdns_pcie_rc = rc;
pcie->cfg_base = rc->cfg_base;
bridge->sysdata = pcie->cfg;
rc->vendor_id = PCI_VENDOR_ID_CIX;
rc->device_id = PCI_DEVICE_ID_CIX_SKY1;
rc->no_inbound_map = 1;
dev_set_drvdata(dev, pcie);
ret = cdns_pcie_hpa_host_setup(rc);
if (ret < 0) {
pci_ecam_free(pcie->cfg);
return ret;
}
return 0;
}
static const struct of_device_id of_sky1_pcie_match[] = {
{ .compatible = "cix,sky1-pcie-host", },
{},
};
MODULE_DEVICE_TABLE(of, of_sky1_pcie_match);
static void sky1_pcie_remove(struct platform_device *pdev)
{
struct sky1_pcie *pcie = platform_get_drvdata(pdev);
pci_ecam_free(pcie->cfg);
}
static struct platform_driver sky1_pcie_driver = {
.probe = sky1_pcie_probe,
.remove = sky1_pcie_remove,
.driver = {
.name = "sky1-pcie",
.of_match_table = of_sky1_pcie_match,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
};
module_platform_driver(sky1_pcie_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("PCIe controller driver for CIX's sky1 SoCs");
MODULE_AUTHOR("Hans Zhang <hans.zhang@cixtech.com>");


@@ -0,0 +1,288 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cadence PCIe host controller library.
*
* Copyright (c) 2017 Cadence
* Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
*/
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/list_sort.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/platform_device.h>
#include "pcie-cadence.h"
#include "pcie-cadence-host-common.h"
#define LINK_RETRAIN_TIMEOUT HZ
u64 bar_max_size[] = {
[RP_BAR0] = _ULL(128 * SZ_2G),
[RP_BAR1] = SZ_2G,
[RP_NO_BAR] = _BITULL(63),
};
EXPORT_SYMBOL_GPL(bar_max_size);
int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
{
u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
unsigned long end_jiffies;
u16 lnk_stat;
/* Wait for link training to complete. Exit after timeout. */
end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
do {
lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
break;
usleep_range(0, 1000);
} while (time_before(jiffies, end_jiffies));
if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
return 0;
return -ETIMEDOUT;
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_training_complete);
int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
cdns_pcie_linkup_func pcie_link_up)
{
struct device *dev = pcie->dev;
int retries;
/* Check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (pcie_link_up(pcie)) {
dev_info(dev, "Link up\n");
return 0;
}
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
return -ETIMEDOUT;
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_wait_for_link);
int cdns_pcie_retrain(struct cdns_pcie *pcie,
cdns_pcie_linkup_func pcie_link_up)
{
u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
u16 lnk_stat, lnk_ctl;
int ret = 0;
/*
* Set retrain bit if current speed is 2.5 GB/s,
* but the PCIe root port support is > 2.5 GB/s.
*/
lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
PCI_EXP_LNKCAP));
if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
return ret;
lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
lnk_ctl = cdns_pcie_rp_readw(pcie,
pcie_cap_off + PCI_EXP_LNKCTL);
lnk_ctl |= PCI_EXP_LNKCTL_RL;
cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
lnk_ctl);
ret = cdns_pcie_host_training_complete(pcie);
if (ret)
return ret;
ret = cdns_pcie_host_wait_for_link(pcie, pcie_link_up);
}
return ret;
}
EXPORT_SYMBOL_GPL(cdns_pcie_retrain);
int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
cdns_pcie_linkup_func pcie_link_up)
{
struct cdns_pcie *pcie = &rc->pcie;
int ret;
ret = cdns_pcie_host_wait_for_link(pcie, pcie_link_up);
/*
* Retrain link for Gen2 training defect
* if quirk flag is set.
*/
if (!ret && rc->quirk_retrain_flag)
ret = cdns_pcie_retrain(pcie, pcie_link_up);
return ret;
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);
enum cdns_pcie_rp_bar
cdns_pcie_host_find_min_bar(struct cdns_pcie_rc *rc, u64 size)
{
enum cdns_pcie_rp_bar bar, sel_bar;
sel_bar = RP_BAR_UNDEFINED;
for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
if (!rc->avail_ib_bar[bar])
continue;
if (size <= bar_max_size[bar]) {
if (sel_bar == RP_BAR_UNDEFINED) {
sel_bar = bar;
continue;
}
if (bar_max_size[bar] < bar_max_size[sel_bar])
sel_bar = bar;
}
}
return sel_bar;
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_find_min_bar);
enum cdns_pcie_rp_bar
cdns_pcie_host_find_max_bar(struct cdns_pcie_rc *rc, u64 size)
{
enum cdns_pcie_rp_bar bar, sel_bar;
sel_bar = RP_BAR_UNDEFINED;
for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
if (!rc->avail_ib_bar[bar])
continue;
if (size >= bar_max_size[bar]) {
if (sel_bar == RP_BAR_UNDEFINED) {
sel_bar = bar;
continue;
}
if (bar_max_size[bar] > bar_max_size[sel_bar])
sel_bar = bar;
}
}
return sel_bar;
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_find_max_bar);
int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
const struct list_head *b)
{
struct resource_entry *entry1, *entry2;
entry1 = container_of(a, struct resource_entry, node);
entry2 = container_of(b, struct resource_entry, node);
return resource_size(entry2->res) - resource_size(entry1->res);
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_dma_ranges_cmp);
int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
struct resource_entry *entry,
cdns_pcie_host_bar_ib_cfg pci_host_ib_config)
{
struct cdns_pcie *pcie = &rc->pcie;
struct device *dev = pcie->dev;
u64 cpu_addr, size, winsize;
enum cdns_pcie_rp_bar bar;
unsigned long flags;
int ret;
cpu_addr = entry->res->start;
flags = entry->res->flags;
size = resource_size(entry->res);
while (size > 0) {
/*
* Try to find a minimum BAR whose size is greater than
* or equal to the remaining resource_entry size. This will
* fail if the size of each of the available BARs is less than
* the remaining resource_entry size.
*
* If a minimum BAR is found, IB ATU will be configured and
* exited.
*/
bar = cdns_pcie_host_find_min_bar(rc, size);
if (bar != RP_BAR_UNDEFINED) {
ret = pci_host_ib_config(rc, bar, cpu_addr, size, flags);
if (ret)
dev_err(dev, "IB BAR: %d config failed\n", bar);
return ret;
}
/*
* If the control reaches here, it would mean the remaining
* resource_entry size cannot be fitted in a single BAR. So we
* find a maximum BAR whose size is less than or equal to the
* remaining resource_entry size and split the resource entry
* so that part of resource entry is fitted inside the maximum
* BAR. The remaining size would be fitted during the next
* iteration of the loop.
*
* If a maximum BAR is not found, there is no way we can fit
* this resource_entry, so we error out.
*/
bar = cdns_pcie_host_find_max_bar(rc, size);
if (bar == RP_BAR_UNDEFINED) {
dev_err(dev, "No free BAR to map cpu_addr %llx\n",
cpu_addr);
return -EINVAL;
}
winsize = bar_max_size[bar];
ret = pci_host_ib_config(rc, bar, cpu_addr, winsize, flags);
if (ret) {
dev_err(dev, "IB BAR: %d config failed\n", bar);
return ret;
}
size -= winsize;
cpu_addr += winsize;
}
return 0;
}
int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc,
cdns_pcie_host_bar_ib_cfg pci_host_ib_config)
{
struct cdns_pcie *pcie = &rc->pcie;
struct device *dev = pcie->dev;
struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
struct resource_entry *entry;
u32 no_bar_nbits = 32;
int err;
bridge = pci_host_bridge_from_priv(rc);
if (!bridge)
return -ENOMEM;
if (list_empty(&bridge->dma_ranges)) {
of_property_read_u32(np, "cdns,no-bar-match-nbits",
&no_bar_nbits);
err = pci_host_ib_config(rc, RP_NO_BAR, 0x0, (u64)1 << no_bar_nbits, 0);
if (err)
dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR);
return err;
}
list_sort(NULL, &bridge->dma_ranges, cdns_pcie_host_dma_ranges_cmp);
resource_list_for_each_entry(entry, &bridge->dma_ranges) {
err = cdns_pcie_host_bar_config(rc, entry, pci_host_ib_config);
if (err) {
dev_err(dev, "Fail to configure IB using dma-ranges\n");
return err;
}
}
return 0;
}
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Cadence PCIe host controller driver");


@@ -0,0 +1,46 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cadence PCIe Host controller driver.
*
* Copyright (c) 2017 Cadence
* Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
*/
#ifndef _PCIE_CADENCE_HOST_COMMON_H
#define _PCIE_CADENCE_HOST_COMMON_H
#include <linux/kernel.h>
#include <linux/pci.h>
extern u64 bar_max_size[];
typedef int (*cdns_pcie_host_bar_ib_cfg)(struct cdns_pcie_rc *,
enum cdns_pcie_rp_bar,
u64,
u64,
unsigned long);
typedef bool (*cdns_pcie_linkup_func)(struct cdns_pcie *);
int cdns_pcie_host_training_complete(struct cdns_pcie *pcie);
int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
cdns_pcie_linkup_func pcie_link_up);
int cdns_pcie_retrain(struct cdns_pcie *pcie, cdns_pcie_linkup_func pcie_linkup_func);
int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
cdns_pcie_linkup_func pcie_link_up);
enum cdns_pcie_rp_bar
cdns_pcie_host_find_min_bar(struct cdns_pcie_rc *rc, u64 size);
enum cdns_pcie_rp_bar
cdns_pcie_host_find_max_bar(struct cdns_pcie_rc *rc, u64 size);
int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
const struct list_head *b);
int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
enum cdns_pcie_rp_bar bar,
u64 cpu_addr,
u64 size,
unsigned long flags);
int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
struct resource_entry *entry,
cdns_pcie_host_bar_ib_cfg pci_host_ib_config);
int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc,
cdns_pcie_host_bar_ib_cfg pci_host_ib_config);
#endif /* _PCIE_CADENCE_HOST_COMMON_H */


@@ -0,0 +1,368 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cadence PCIe host controller driver.
*
* Copyright (c) 2024, Cadence Design Systems
* Author: Manikandan K Pillai <mpillai@cadence.com>
*/
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/list_sort.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include "pcie-cadence.h"
#include "pcie-cadence-host-common.h"
static u8 bar_aperture_mask[] = {
[RP_BAR0] = 0x3F,
[RP_BAR1] = 0x3F,
};
void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
struct cdns_pcie *pcie = &rc->pcie;
unsigned int busn = bus->number;
u32 addr0, desc0, desc1, ctrl0;
u32 regval;
if (pci_is_root_bus(bus)) {
/*
* Only the root port (devfn == 0) is connected to this bus.
* All other PCI devices are behind some bridge hence on another
* bus.
*/
if (devfn)
return NULL;
return pcie->reg_base + (where & 0xfff);
}
/* Clear AXI link-down status */
regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN,
(regval & ~GENMASK(0, 0)));
/* Update Output registers for AXI region 0 */
addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0);
desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0));
desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK;
desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
if (busn == bridge->busnr + 1)
desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
else
desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0);
return rc->cfg_base + (where & 0xfff);
}
static struct pci_ops cdns_pcie_hpa_host_ops = {
.map_bus = cdns_pci_hpa_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
};
static void cdns_pcie_hpa_host_enable_ptm_response(struct cdns_pcie *pcie)
{
u32 val;
val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL);
cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL,
val | CDNS_PCIE_HPA_LM_PTM_CTRL_PTMRSEN);
}
static int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
enum cdns_pcie_rp_bar bar,
u64 cpu_addr, u64 size,
unsigned long flags)
{
struct cdns_pcie *pcie = &rc->pcie;
u32 addr0, addr1, aperture, value;
if (!rc->avail_ib_bar[bar])
return -ENODEV;
rc->avail_ib_bar[bar] = false;
aperture = ilog2(size);
if (bar == RP_NO_BAR) {
addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) |
(lower_32_bits(cpu_addr) & GENMASK(31, 8));
addr1 = upper_32_bits(cpu_addr);
} else {
addr0 = lower_32_bits(cpu_addr);
addr1 = upper_32_bits(cpu_addr);
}
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1);
if (bar == RP_NO_BAR)
bar = (enum cdns_pcie_rp_bar)BAR_0;
value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG);
value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) |
HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) |
HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) |
HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) |
HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 7));
if (size + cpu_addr >= SZ_4G) {
value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar);
if ((flags & IORESOURCE_PREFETCH))
value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar);
} else {
value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar);
if ((flags & IORESOURCE_PREFETCH))
value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar);
}
value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture);
cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
return 0;
}
static int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
u32 value, ctrl;
/*
* Set the root port BAR configuration register:
* - disable both BAR0 and BAR1
* - enable Prefetchable Memory Base and Limit registers in type 1
* config space (64 bits)
* - enable IO Base and Limit registers in type 1 config
* space (32 bits)
*/
ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED;
value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE |
CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS;
cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG,
CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
if (rc->vendor_id != 0xffff)
cdns_pcie_hpa_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id);
if (rc->device_id != 0xffff)
cdns_pcie_hpa_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id);
cdns_pcie_hpa_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
cdns_pcie_hpa_rp_writeb(pcie, PCI_CLASS_PROG, 0);
cdns_pcie_hpa_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
/* Enable bus mastering */
value = cdns_pcie_hpa_readl(pcie, REG_BANK_RP, PCI_COMMAND);
value |= (PCI_COMMAND_MEMORY | PCI_COMMAND_IO | PCI_COMMAND_MASTER);
cdns_pcie_hpa_writel(pcie, REG_BANK_RP, PCI_COMMAND, value);
return 0;
}
static void cdns_pcie_hpa_create_region_for_cfg(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
struct resource *cfg_res = rc->cfg_res;
struct resource_entry *entry;
u64 cpu_addr = cfg_res->start;
u32 addr0, addr1, desc1;
int busnr = 0;
entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
if (entry)
busnr = entry->res->start;
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_TAG_MANAGEMENT, 0x01000000);
/*
 * Reserve region 0 for PCI configuration space accesses:
 * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
 * cdns_pci_map_bus(); the other region registers are set here once and for all
 */
desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), 0x0);
/* Type-1 CFG */
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), 0x05000000);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
(lower_32_bits(cpu_addr) & GENMASK(31, 8));
addr1 = upper_32_bits(cpu_addr);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), 0x06000000);
}
static int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
struct resource_entry *entry;
int r = 0, busnr = 0;
if (!rc->ecam_supported)
cdns_pcie_hpa_create_region_for_cfg(rc);
entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
if (entry)
busnr = entry->res->start;
r++;
if (pcie->msg_res) {
cdns_pcie_hpa_set_outbound_region_for_normal_msg(pcie, busnr, 0, r,
pcie->msg_res->start);
r++;
}
resource_list_for_each_entry(entry, &bridge->windows) {
struct resource *res = entry->res;
u64 pci_addr = res->start - entry->offset;
if (resource_type(res) == IORESOURCE_IO)
cdns_pcie_hpa_set_outbound_region(pcie, busnr, 0, r,
true,
pci_pio_to_address(res->start),
pci_addr,
resource_size(res));
else
cdns_pcie_hpa_set_outbound_region(pcie, busnr, 0, r,
false,
res->start,
pci_addr,
resource_size(res));
r++;
}
if (rc->no_inbound_map)
return 0;
else
return cdns_pcie_host_map_dma_ranges(rc, cdns_pcie_hpa_host_bar_ib_config);
}
static int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc)
{
int err;
err = cdns_pcie_hpa_host_init_root_port(rc);
if (err)
return err;
return cdns_pcie_hpa_host_init_address_translation(rc);
}
int cdns_pcie_hpa_host_link_setup(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
struct device *dev = rc->pcie.dev;
int ret;
if (rc->quirk_detect_quiet_flag)
cdns_pcie_hpa_detect_quiet_min_delay_set(&rc->pcie);
cdns_pcie_hpa_host_enable_ptm_response(pcie);
ret = cdns_pcie_start_link(pcie);
if (ret) {
dev_err(dev, "Failed to start link\n");
return ret;
}
ret = cdns_pcie_host_wait_for_link(pcie, cdns_pcie_hpa_link_up);
if (ret)
dev_dbg(dev, "PCIe link never came up\n");
return ret;
}
EXPORT_SYMBOL_GPL(cdns_pcie_hpa_host_link_setup);
int cdns_pcie_hpa_host_setup(struct cdns_pcie_rc *rc)
{
struct device *dev = rc->pcie.dev;
struct platform_device *pdev = to_platform_device(dev);
struct pci_host_bridge *bridge;
enum cdns_pcie_rp_bar bar;
struct cdns_pcie *pcie;
struct resource *res;
int ret;
bridge = pci_host_bridge_from_priv(rc);
if (!bridge)
return -ENOMEM;
pcie = &rc->pcie;
pcie->is_rc = true;
if (!pcie->reg_base) {
pcie->reg_base = devm_platform_ioremap_resource_byname(pdev, "reg");
if (IS_ERR(pcie->reg_base)) {
dev_err(dev, "missing \"reg\"\n");
return PTR_ERR(pcie->reg_base);
}
}
/* ECAM config space is remapped at glue layer */
if (!rc->cfg_base) {
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
rc->cfg_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(rc->cfg_base))
return PTR_ERR(rc->cfg_base);
rc->cfg_res = res;
}
/* Set the EROM BAR aperture to 0 */
cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_EROM, 0x0);
ret = cdns_pcie_hpa_host_link_setup(rc);
if (ret)
return ret;
for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
rc->avail_ib_bar[bar] = true;
ret = cdns_pcie_hpa_host_init(rc);
if (ret)
return ret;
if (!bridge->ops)
bridge->ops = &cdns_pcie_hpa_host_ops;
return pci_host_probe(bridge);
}
EXPORT_SYMBOL_GPL(cdns_pcie_hpa_host_setup);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Cadence PCIe host controller driver");


@@ -12,14 +12,7 @@
#include <linux/platform_device.h>
#include "pcie-cadence.h"
#define LINK_RETRAIN_TIMEOUT HZ
static u64 bar_max_size[] = {
[RP_BAR0] = _ULL(128 * SZ_2G),
[RP_BAR1] = SZ_2G,
[RP_NO_BAR] = _BITULL(63),
};
#include "pcie-cadence-host-common.h"
static u8 bar_aperture_mask[] = {
[RP_BAR0] = 0x1F,
@@ -81,77 +74,6 @@ static struct pci_ops cdns_pcie_host_ops = {
.write = pci_generic_config_write,
};
static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
{
u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
unsigned long end_jiffies;
u16 lnk_stat;
/* Wait for link training to complete. Exit after timeout. */
end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
do {
lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
break;
usleep_range(0, 1000);
} while (time_before(jiffies, end_jiffies));
if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
return 0;
return -ETIMEDOUT;
}
static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
{
struct device *dev = pcie->dev;
int retries;
/* Check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (cdns_pcie_link_up(pcie)) {
dev_info(dev, "Link up\n");
return 0;
}
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
return -ETIMEDOUT;
}
static int cdns_pcie_retrain(struct cdns_pcie *pcie)
{
u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
u16 lnk_stat, lnk_ctl;
int ret = 0;
/*
 * Set the Retrain Link bit if the current link speed is 2.5 GT/s
 * but the PCIe root port supports > 2.5 GT/s.
 */
lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
PCI_EXP_LNKCAP));
if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
return ret;
lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
lnk_ctl = cdns_pcie_rp_readw(pcie,
pcie_cap_off + PCI_EXP_LNKCTL);
lnk_ctl |= PCI_EXP_LNKCTL_RL;
cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
lnk_ctl);
ret = cdns_pcie_host_training_complete(pcie);
if (ret)
return ret;
ret = cdns_pcie_host_wait_for_link(pcie);
}
return ret;
}
static void cdns_pcie_host_disable_ptm_response(struct cdns_pcie *pcie)
{
u32 val;
@@ -168,23 +90,6 @@ static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie)
cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN);
}
static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
int ret;
ret = cdns_pcie_host_wait_for_link(pcie);
/*
 * Retrain the link to work around a Gen2 training defect
 * if the quirk flag is set.
 */
if (!ret && rc->quirk_retrain_flag)
ret = cdns_pcie_retrain(pcie);
return ret;
}
static void cdns_pcie_host_deinit_root_port(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
@@ -245,10 +150,11 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
return 0;
}
static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
enum cdns_pcie_rp_bar bar,
u64 cpu_addr, u64 size,
unsigned long flags)
int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
enum cdns_pcie_rp_bar bar,
u64 cpu_addr,
u64 size,
unsigned long flags)
{
struct cdns_pcie *pcie = &rc->pcie;
u32 addr0, addr1, aperture, value;
@@ -290,137 +196,6 @@ static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
return 0;
}
static enum cdns_pcie_rp_bar
cdns_pcie_host_find_min_bar(struct cdns_pcie_rc *rc, u64 size)
{
enum cdns_pcie_rp_bar bar, sel_bar;
sel_bar = RP_BAR_UNDEFINED;
for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
if (!rc->avail_ib_bar[bar])
continue;
if (size <= bar_max_size[bar]) {
if (sel_bar == RP_BAR_UNDEFINED) {
sel_bar = bar;
continue;
}
if (bar_max_size[bar] < bar_max_size[sel_bar])
sel_bar = bar;
}
}
return sel_bar;
}
static enum cdns_pcie_rp_bar
cdns_pcie_host_find_max_bar(struct cdns_pcie_rc *rc, u64 size)
{
enum cdns_pcie_rp_bar bar, sel_bar;
sel_bar = RP_BAR_UNDEFINED;
for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
if (!rc->avail_ib_bar[bar])
continue;
if (size >= bar_max_size[bar]) {
if (sel_bar == RP_BAR_UNDEFINED) {
sel_bar = bar;
continue;
}
if (bar_max_size[bar] > bar_max_size[sel_bar])
sel_bar = bar;
}
}
return sel_bar;
}
static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
struct resource_entry *entry)
{
u64 cpu_addr, pci_addr, size, winsize;
struct cdns_pcie *pcie = &rc->pcie;
struct device *dev = pcie->dev;
enum cdns_pcie_rp_bar bar;
unsigned long flags;
int ret;
cpu_addr = entry->res->start;
pci_addr = entry->res->start - entry->offset;
flags = entry->res->flags;
size = resource_size(entry->res);
if (entry->offset) {
dev_err(dev, "PCI addr: %llx must be equal to CPU addr: %llx\n",
pci_addr, cpu_addr);
return -EINVAL;
}
while (size > 0) {
/*
 * Try to find the smallest BAR whose size is greater than or
 * equal to the remaining resource_entry size. This fails if
 * every available BAR is smaller than the remaining
 * resource_entry size.
 * If such a BAR is found, configure the IB ATU for it and
 * return.
 */
bar = cdns_pcie_host_find_min_bar(rc, size);
if (bar != RP_BAR_UNDEFINED) {
ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr,
size, flags);
if (ret)
dev_err(dev, "IB BAR: %d config failed\n", bar);
return ret;
}
/*
 * If control reaches here, the remaining resource_entry size
 * cannot fit in a single BAR. So find the largest BAR whose
 * size is less than or equal to the remaining resource_entry
 * size and split the resource entry so that part of it fits
 * inside that BAR. The remaining size is fitted during the
 * next iteration of the loop.
 * If no such BAR is found, there is no way to fit this
 * resource_entry, so error out.
 */
bar = cdns_pcie_host_find_max_bar(rc, size);
if (bar == RP_BAR_UNDEFINED) {
dev_err(dev, "No free BAR to map cpu_addr %llx\n",
cpu_addr);
return -EINVAL;
}
winsize = bar_max_size[bar];
ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize,
flags);
if (ret) {
dev_err(dev, "IB BAR: %d config failed\n", bar);
return ret;
}
size -= winsize;
cpu_addr += winsize;
}
return 0;
}
static int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
const struct list_head *b)
{
struct resource_entry *entry1, *entry2;
entry1 = container_of(a, struct resource_entry, node);
entry2 = container_of(b, struct resource_entry, node);
return resource_size(entry2->res) - resource_size(entry1->res);
}
static void cdns_pcie_host_unmap_dma_ranges(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
@@ -447,43 +222,6 @@ static void cdns_pcie_host_unmap_dma_ranges(struct cdns_pcie_rc *rc)
}
}
static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
struct device *dev = pcie->dev;
struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
struct resource_entry *entry;
u32 no_bar_nbits = 32;
int err;
bridge = pci_host_bridge_from_priv(rc);
if (!bridge)
return -ENOMEM;
if (list_empty(&bridge->dma_ranges)) {
of_property_read_u32(np, "cdns,no-bar-match-nbits",
&no_bar_nbits);
err = cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0,
(u64)1 << no_bar_nbits, 0);
if (err)
dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR);
return err;
}
list_sort(NULL, &bridge->dma_ranges, cdns_pcie_host_dma_ranges_cmp);
resource_list_for_each_entry(entry, &bridge->dma_ranges) {
err = cdns_pcie_host_bar_config(rc, entry);
if (err) {
dev_err(dev, "Failed to configure IB using dma-ranges\n");
return err;
}
}
return 0;
}
static void cdns_pcie_host_deinit_address_translation(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
@@ -561,7 +299,7 @@ static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
r++;
}
return cdns_pcie_host_map_dma_ranges(rc);
return cdns_pcie_host_map_dma_ranges(rc, cdns_pcie_host_bar_ib_config);
}
static void cdns_pcie_host_deinit(struct cdns_pcie_rc *rc)
@@ -607,7 +345,7 @@ int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
return ret;
}
ret = cdns_pcie_host_start_link(rc);
ret = cdns_pcie_host_start_link(rc, cdns_pcie_link_up);
if (ret)
dev_dbg(dev, "PCIe link never came up\n");


@@ -0,0 +1,193 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cadence PCIe controller driver.
*
* Copyright (c) 2024, Cadence Design Systems
* Author: Manikandan K Pillai <mpillai@cadence.com>
*/
#ifndef _PCIE_CADENCE_HPA_REGS_H
#define _PCIE_CADENCE_HPA_REGS_H
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-epf.h>
#include <linux/phy/phy.h>
#include <linux/bitfield.h>
/* High Performance Architecture (HPA) PCIe controller registers */
#define CDNS_PCIE_HPA_IP_REG_BANK 0x01000000
#define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK 0x01003C00
#define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON 0x02020000
/* Address Translation Registers */
#define CDNS_PCIE_HPA_AXI_SLAVE 0x03000000
#define CDNS_PCIE_HPA_AXI_MASTER 0x03002000
/* Root Port register base address */
#define CDNS_PCIE_HPA_RP_BASE 0x0
#define CDNS_PCIE_HPA_LM_ID 0x1420
/* Endpoint Function BARs */
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \
CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn))
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn) (0x4000 * (pfn))
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn) ((0x4000 * (pfn)) + 0x04)
#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \
CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn))
#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn) ((0x4000 * (vfn)) + 0x08)
#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn) ((0x4000 * (vfn)) + 0x0C)
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) \
(GENMASK(5, 0) << (0x4 + (f) * 10))
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
(((a) << (4 + ((b) * 10))) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b)))
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) \
(GENMASK(3, 0) << ((f) * 10))
#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
(((c) << ((b) * 10)) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)))
/* Endpoint Function Configuration Register */
#define CDNS_PCIE_HPA_LM_EP_FUNC_CFG 0x02C0
/* Root Complex BAR Configuration Register */
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG 0x14
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(9, 4)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK, a)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(3, 0)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(c) \
FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK, c)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(19, 14)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK, a)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(13, 10)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(c) \
FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK, c)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(20)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(21)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE BIT(22)
#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS BIT(23)
/* BAR control values applicable to both Endpoint Function and Root Complex */
#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED 0x0
#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS 0x3
#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS 0x1
#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x9
#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS 0x5
#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0xD
#define HPA_LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED << ((bar) * 10))
#define HPA_LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS << ((bar) * 10))
#define HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS << ((bar) * 10))
#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << ((bar) * 10))
#define HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS << ((bar) * 10))
#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << ((bar) * 10))
#define HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture) \
(((aperture) - 7) << (((bar) * 10) + 4))
#define CDNS_PCIE_HPA_LM_PTM_CTRL 0x0520
#define CDNS_PCIE_HPA_LM_PTM_CTRL_PTMRSEN BIT(17)
/* Root Port Registers (PCI config space for the root port function) */
#define CDNS_PCIE_HPA_RP_CAP_OFFSET 0xC0
/* Region r Outbound AXI to PCIe Address Translation Register 0 */
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r) (0x1010 + ((r) & 0x1F) * 0x0080)
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(23, 16)
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK, devfn)
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(31, 24)
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK, bus)
/* Region r Outbound AXI to PCIe Address Translation Register 1 */
#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r) (0x1014 + ((r) & 0x1F) * 0x0080)
/* Region r Outbound PCIe Descriptor Register 0 */
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r) (0x1008 + ((r) & 0x1F) * 0x0080)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(28, 24)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x0)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x2)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x4)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x5)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x10)
/* Region r Outbound PCIe Descriptor Register 1 */
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r) (0x100C + ((r) & 0x1F) * 0x0080)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK GENMASK(31, 24)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(bus) \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK, bus)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK GENMASK(23, 16)
#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(devfn) \
FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn)
#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r) (0x1018 + ((r) & 0x1F) * 0x0080)
#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS BIT(26)
#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN BIT(25)
/* Region r AXI Region Base Address Register 0 */
#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r) (0x1000 + ((r) & 0x1F) * 0x0080)
#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
/* Region r AXI Region Base Address Register 1 */
#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r) (0x1004 + ((r) & 0x1F) * 0x0080)
/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar) (((bar) * 0x0008))
#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar) (0x04 + ((bar) * 0x0008))
/* AXI link down register */
#define CDNS_PCIE_HPA_AT_LINKDOWN 0x04
/*
* Physical Layer Configuration Register 0
* This register contains the parameters required for functional setup
* of Physical Layer.
*/
#define CDNS_PCIE_HPA_PHY_LAYER_CFG0 0x0400
#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK GENMASK(26, 24)
#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay) \
FIELD_PREP(CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK, delay)
#define CDNS_PCIE_HPA_LINK_TRNG_EN_MASK GENMASK(27, 27)
#define CDNS_PCIE_HPA_PHY_DBG_STS_REG0 0x0420
#define CDNS_PCIE_HPA_RP_MAX_IB 0x3
#define CDNS_PCIE_HPA_MAX_OB 15
/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) (((fn) * 0x0080) + ((bar) * 0x0008))
#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) (0x4 + ((fn) * 0x0080) + ((bar) * 0x0008))
/* Miscellaneous offset definitions */
#define CDNS_PCIE_HPA_TAG_MANAGEMENT 0x0
#define CDNS_PCIE_HPA_SLAVE_RESP 0x100
#define I_ROOT_PORT_REQ_ID_REG 0x141c
#define LM_HAL_SBSA_CTRL 0x1170
#define I_PCIE_BUS_NUMBERS (CDNS_PCIE_HPA_RP_BASE + 0x18)
#define CDNS_PCIE_EROM 0x18
#endif /* _PCIE_CADENCE_HPA_REGS_H */


@@ -0,0 +1,167 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cadence PCIe controller driver.
*
* Copyright (c) 2024, Cadence Design Systems
* Author: Manikandan K Pillai <mpillai@cadence.com>
*/
#include <linux/kernel.h>
#include <linux/of.h>
#include "pcie-cadence.h"
bool cdns_pcie_hpa_link_up(struct cdns_pcie *pcie)
{
u32 pl_reg_val;
pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_DBG_STS_REG0);
return pl_reg_val & BIT(0);
}
EXPORT_SYMBOL_GPL(cdns_pcie_hpa_link_up);
void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
{
u32 delay = 0x3;
u32 ltssm_control_cap;
/* Set the LTSSM Detect Quiet state min. delay to 2ms */
ltssm_control_cap = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG,
CDNS_PCIE_HPA_PHY_LAYER_CFG0);
ltssm_control_cap = ((ltssm_control_cap &
~CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK) |
CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay));
cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG,
CDNS_PCIE_HPA_PHY_LAYER_CFG0, ltssm_control_cap);
}
EXPORT_SYMBOL_GPL(cdns_pcie_hpa_detect_quiet_min_delay_set);
void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
u64 cpu_addr, u64 pci_addr, size_t size)
{
/*
 * roundup_pow_of_two() returns an unsigned long, which is not suited
 * for 64-bit values
 */
u64 sz = 1ULL << fls64(size - 1);
int nbits = ilog2(sz);
u32 addr0, addr1, desc0, desc1, ctrl0;
if (nbits < 8)
nbits = 8;
/* Set the PCI address */
addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
(lower_32_bits(pci_addr) & GENMASK(31, 8));
addr1 = upper_32_bits(pci_addr);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), addr0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), addr1);
/* Set the PCIe header descriptor */
if (is_io)
desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO;
else
desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM;
desc1 = 0;
ctrl0 = 0;
/*
* Whether Bit [26] is set or not inside DESC0 register of the outbound
* PCIe descriptor, the PCI function number must be set into
* Bits [31:24] of DESC1 anyway.
*
* In Root Complex mode, the function number is always 0 but in Endpoint
* mode, the PCIe controller may support more than one function. This
* function number needs to be set properly into the outbound PCIe
* descriptor.
*
 * Besides, setting Bit [26] is mandatory when in Root Complex mode:
 * the driver must then provide the bus number in Bits [31:24] of DESC1
 * and the device number in Bits [23:16] of DESC0. Like the function
 * number, the device number is always 0 in Root Complex mode.
*
* However when in Endpoint mode, we can clear Bit [26] of DESC0, hence
* the PCIe controller will use the captured values for the bus and
* device numbers.
*/
if (pcie->is_rc) {
/* The device and function numbers are always 0 */
desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) |
CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
} else {
/*
* Use captured values for bus and device numbers but still
* need to set the function number
*/
desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn);
}
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1);
addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
(lower_32_bits(cpu_addr) & GENMASK(31, 8));
addr1 = upper_32_bits(cpu_addr);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0);
}
EXPORT_SYMBOL_GPL(cdns_pcie_hpa_set_outbound_region);
void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
u8 busnr, u8 fn,
u32 r, u64 cpu_addr)
{
u32 addr0, addr1, desc0, desc1, ctrl0;
desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
desc1 = 0;
ctrl0 = 0;
/* See cdns_pcie_hpa_set_outbound_region() comments above */
if (pcie->is_rc) {
desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) |
CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
} else {
desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn);
}
addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
(lower_32_bits(cpu_addr) & GENMASK(31, 8));
addr1 = upper_32_bits(cpu_addr);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1);
cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0);
}
EXPORT_SYMBOL_GPL(cdns_pcie_hpa_set_outbound_region_for_normal_msg);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Cadence PCIe controller driver");


@@ -0,0 +1,230 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cadence PCIe controller driver.
*
* Copyright (c) 2017 Cadence
* Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
*/
#ifndef _PCIE_CADENCE_LGA_REGS_H
#define _PCIE_CADENCE_LGA_REGS_H
#include <linux/bitfield.h>
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
/* Local Management Registers */
#define CDNS_PCIE_LM_BASE 0x00100000
/* Vendor ID Register */
#define CDNS_PCIE_LM_ID (CDNS_PCIE_LM_BASE + 0x0044)
#define CDNS_PCIE_LM_ID_VENDOR_MASK GENMASK(15, 0)
#define CDNS_PCIE_LM_ID_VENDOR_SHIFT 0
#define CDNS_PCIE_LM_ID_VENDOR(vid) \
(((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
#define CDNS_PCIE_LM_ID_SUBSYS_MASK GENMASK(31, 16)
#define CDNS_PCIE_LM_ID_SUBSYS_SHIFT 16
#define CDNS_PCIE_LM_ID_SUBSYS(sub) \
(((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
/* Root Port Requester ID Register */
#define CDNS_PCIE_LM_RP_RID (CDNS_PCIE_LM_BASE + 0x0228)
#define CDNS_PCIE_LM_RP_RID_MASK GENMASK(15, 0)
#define CDNS_PCIE_LM_RP_RID_SHIFT 0
#define CDNS_PCIE_LM_RP_RID_(rid) \
(((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
/* Endpoint Bus and Device Number Register */
#define CDNS_PCIE_LM_EP_ID (CDNS_PCIE_LM_BASE + 0x022C)
#define CDNS_PCIE_LM_EP_ID_DEV_MASK GENMASK(4, 0)
#define CDNS_PCIE_LM_EP_ID_DEV_SHIFT 0
#define CDNS_PCIE_LM_EP_ID_BUS_MASK GENMASK(15, 8)
#define CDNS_PCIE_LM_EP_ID_BUS_SHIFT 8
/* Endpoint Function f BAR b Configuration Registers */
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_4) ? CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
(CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
(CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_4) ? CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn))
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) \
(CDNS_PCIE_LM_BASE + 0x0280 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn) \
(CDNS_PCIE_LM_BASE + 0x0284 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
(GENMASK(4, 0) << ((b) * 8))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
(((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
(GENMASK(7, 5) << ((b) * 8))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
(((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
/* Endpoint Function Configuration Register */
#define CDNS_PCIE_LM_EP_FUNC_CFG (CDNS_PCIE_LM_BASE + 0x02C0)
/* Root Complex BAR Configuration Register */
#define CDNS_PCIE_LM_RC_BAR_CFG (CDNS_PCIE_LM_BASE + 0x0300)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(5, 0)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
(((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(8, 6)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
(((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(13, 9)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
(((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(16, 14)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
(((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(17)
#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS 0
#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(18)
#define CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE BIT(19)
#define CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS 0
#define CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS BIT(20)
#define CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE BIT(31)
/* BAR control values applicable to both Endpoint Function and Root Complex */
#define CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED 0x0
#define CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS 0x1
#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS 0x4
#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x5
#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS 0x6
#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0x7
#define LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_APERTURE(bar, aperture) \
(((aperture) - 2) << ((bar) * 8))
/* PTM Control Register */
#define CDNS_PCIE_LM_PTM_CTRL (CDNS_PCIE_LM_BASE + 0x0DA8)
#define CDNS_PCIE_LM_TPM_CTRL_PTMRSEN BIT(17)
/*
* Endpoint Function Registers (PCI configuration space for endpoint functions)
*/
#define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90
#define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xB0
#define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET 0xC0
#define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200
/* Endpoint PF Registers */
#define CDNS_PCIE_CORE_PF_I_ARI_CAP_AND_CTRL(fn) (0x144 + (fn) * 0x1000)
#define CDNS_PCIE_ARI_CAP_NFN_MASK GENMASK(15, 8)
/* Root Port Registers (PCI configuration space for the root port function) */
#define CDNS_PCIE_RP_BASE 0x00200000
#define CDNS_PCIE_RP_CAP_OFFSET 0xC0
/* Address Translation Registers */
#define CDNS_PCIE_AT_BASE 0x00400000
/* Region r Outbound AXI to PCIe Address Translation Register 0 */
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
(CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1F) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
(((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
(((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
/* Region r Outbound AXI to PCIe Address Translation Register 1 */
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
(CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1F) * 0x0020)
/* Region r Outbound PCIe Descriptor Register 0 */
#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
(CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1F) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(3, 0)
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM 0x2
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO 0x6
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 0xA
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 0xB
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG 0xC
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG 0xD
/* Bit 23 MUST be set in RC mode. */
#define CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23)
#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24)
#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
(((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
/* Region r Outbound PCIe Descriptor Register 1 */
#define CDNS_PCIE_AT_OB_REGION_DESC1(r) \
(CDNS_PCIE_AT_BASE + 0x000C + ((r) & 0x1F) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK GENMASK(7, 0)
#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
/* Region r AXI Region Base Address Register 0 */
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
(CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1F) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
/* Region r AXI Region Base Address Register 1 */
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
(CDNS_PCIE_AT_BASE + 0x001C + ((r) & 0x1F) * 0x0020)
/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
(CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
(CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
/* AXI link down register */
#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
/* LTSSM Capabilities register */
#define CDNS_PCIE_LTSSM_CONTROL_CAP (CDNS_PCIE_LM_BASE + 0x0054)
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK GENMASK(2, 1)
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
(((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
#define CDNS_PCIE_RP_MAX_IB 0x3
#define CDNS_PCIE_MAX_OB 32
/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
(CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
(CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
/* Normal/Vendor specific message access: offset inside some outbound region */
#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK GENMASK(7, 5)
#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
(((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
#define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
(((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
#define CDNS_PCIE_MSG_NO_DATA BIT(16)
#endif /* _PCIE_CADENCE_LGA_REGS_H */


@@ -22,10 +22,6 @@ struct cdns_plat_pcie {
struct cdns_pcie *pcie;
};
static const struct of_device_id cdns_plat_pcie_of_match[];
static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr)
@@ -177,4 +173,7 @@ static struct platform_driver cdns_plat_pcie_driver = {
.probe = cdns_plat_pcie_probe,
.shutdown = cdns_plat_pcie_shutdown,
};
module_platform_driver(cdns_plat_pcie_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Cadence PCIe controller platform driver");


@@ -23,6 +23,17 @@ u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap)
}
EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability);
bool cdns_pcie_linkup(struct cdns_pcie *pcie)
{
u32 pl_reg_val;
pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE);
if (pl_reg_val & GENMASK(0, 0))
return true;
return false;
}
EXPORT_SYMBOL_GPL(cdns_pcie_linkup);
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
{
u32 delay = 0x3;
@@ -293,6 +304,7 @@ const struct dev_pm_ops cdns_pcie_pm_ops = {
NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
cdns_pcie_resume_noirq)
};
EXPORT_SYMBOL_GPL(cdns_pcie_pm_ops);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Cadence PCIe controller driver");


@@ -7,211 +7,12 @@
#define _PCIE_CADENCE_H
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-epf.h>
#include <linux/phy/phy.h>
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
/*
* Local Management Registers
*/
#define CDNS_PCIE_LM_BASE 0x00100000
/* Vendor ID Register */
#define CDNS_PCIE_LM_ID (CDNS_PCIE_LM_BASE + 0x0044)
#define CDNS_PCIE_LM_ID_VENDOR_MASK GENMASK(15, 0)
#define CDNS_PCIE_LM_ID_VENDOR_SHIFT 0
#define CDNS_PCIE_LM_ID_VENDOR(vid) \
(((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
#define CDNS_PCIE_LM_ID_SUBSYS_MASK GENMASK(31, 16)
#define CDNS_PCIE_LM_ID_SUBSYS_SHIFT 16
#define CDNS_PCIE_LM_ID_SUBSYS(sub) \
(((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
/* Root Port Requester ID Register */
#define CDNS_PCIE_LM_RP_RID (CDNS_PCIE_LM_BASE + 0x0228)
#define CDNS_PCIE_LM_RP_RID_MASK GENMASK(15, 0)
#define CDNS_PCIE_LM_RP_RID_SHIFT 0
#define CDNS_PCIE_LM_RP_RID_(rid) \
(((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
/* Endpoint Bus and Device Number Register */
#define CDNS_PCIE_LM_EP_ID (CDNS_PCIE_LM_BASE + 0x022c)
#define CDNS_PCIE_LM_EP_ID_DEV_MASK GENMASK(4, 0)
#define CDNS_PCIE_LM_EP_ID_DEV_SHIFT 0
#define CDNS_PCIE_LM_EP_ID_BUS_MASK GENMASK(15, 8)
#define CDNS_PCIE_LM_EP_ID_BUS_SHIFT 8
/* Endpoint Function f BAR b Configuration Registers */
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_4) ? CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
(CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
(CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_4) ? CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn))
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) \
(CDNS_PCIE_LM_BASE + 0x0280 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn) \
(CDNS_PCIE_LM_BASE + 0x0284 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
(GENMASK(4, 0) << ((b) * 8))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
(((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
(GENMASK(7, 5) << ((b) * 8))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
(((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
/* Endpoint Function Configuration Register */
#define CDNS_PCIE_LM_EP_FUNC_CFG (CDNS_PCIE_LM_BASE + 0x02c0)
/* Root Complex BAR Configuration Register */
#define CDNS_PCIE_LM_RC_BAR_CFG (CDNS_PCIE_LM_BASE + 0x0300)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(5, 0)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
(((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(8, 6)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
(((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(13, 9)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
(((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(16, 14)
#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
(((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(17)
#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS 0
#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(18)
#define CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE BIT(19)
#define CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS 0
#define CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS BIT(20)
#define CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE BIT(31)
/* BAR control values applicable to both Endpoint Function and Root Complex */
#define CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED 0x0
#define CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS 0x1
#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS 0x4
#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x5
#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS 0x6
#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0x7
#define LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
(CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << (((bar) * 8) + 6))
#define LM_RC_BAR_CFG_APERTURE(bar, aperture) \
(((aperture) - 2) << ((bar) * 8))
/* PTM Control Register */
#define CDNS_PCIE_LM_PTM_CTRL (CDNS_PCIE_LM_BASE + 0x0da8)
#define CDNS_PCIE_LM_TPM_CTRL_PTMRSEN BIT(17)
/*
* Endpoint Function Registers (PCI configuration space for endpoint functions)
*/
#define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
/*
* Endpoint PF Registers
*/
#define CDNS_PCIE_CORE_PF_I_ARI_CAP_AND_CTRL(fn) (0x144 + (fn) * 0x1000)
#define CDNS_PCIE_ARI_CAP_NFN_MASK GENMASK(15, 8)
/*
* Root Port Registers (PCI configuration space for the root port function)
*/
#define CDNS_PCIE_RP_BASE 0x00200000
#define CDNS_PCIE_RP_CAP_OFFSET 0xc0
/*
* Address Translation Registers
*/
#define CDNS_PCIE_AT_BASE 0x00400000
/* Region r Outbound AXI to PCIe Address Translation Register 0 */
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
(CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
(((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20)
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
(((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
/* Region r Outbound AXI to PCIe Address Translation Register 1 */
#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
(CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
/* Region r Outbound PCIe Descriptor Register 0 */
#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
(CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(3, 0)
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM 0x2
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO 0x6
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 0xa
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 0xb
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG 0xc
#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG 0xd
/* Bit 23 MUST be set in RC mode. */
#define CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23)
#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24)
#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
(((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
/* Region r Outbound PCIe Descriptor Register 1 */
#define CDNS_PCIE_AT_OB_REGION_DESC1(r) \
(CDNS_PCIE_AT_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK GENMASK(7, 0)
#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
/* Region r AXI Region Base Address Register 0 */
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
(CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
/* Region r AXI Region Base Address Register 1 */
#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
(CDNS_PCIE_AT_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
(CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
(((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
(CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
/* AXI link down register */
#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
/* LTSSM Capabilities register */
#define CDNS_PCIE_LTSSM_CONTROL_CAP (CDNS_PCIE_LM_BASE + 0x0054)
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK GENMASK(2, 1)
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
(((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
#include "pcie-cadence-lga-regs.h"
#include "pcie-cadence-hpa-regs.h"
enum cdns_pcie_rp_bar {
RP_BAR_UNDEFINED = -1,
@@ -220,42 +21,63 @@ enum cdns_pcie_rp_bar {
RP_NO_BAR
};
#define CDNS_PCIE_RP_MAX_IB 0x3
#define CDNS_PCIE_MAX_OB 32
struct cdns_pcie_rp_ib_bar {
u64 size;
bool free;
};
/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
(CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
(CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
/* Normal/Vendor specific message access: offset inside some outbound region */
#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK GENMASK(7, 5)
#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
(((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
#define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
(((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
#define CDNS_PCIE_MSG_DATA BIT(16)
struct cdns_pcie;
struct cdns_pcie_rc;
enum cdns_pcie_reg_bank {
REG_BANK_RP,
REG_BANK_IP_REG,
REG_BANK_IP_CFG_CTRL_REG,
REG_BANK_AXI_MASTER_COMMON,
REG_BANK_AXI_MASTER,
REG_BANK_AXI_SLAVE,
REG_BANK_AXI_HLS,
REG_BANK_AXI_RAS,
REG_BANK_AXI_DTI,
REG_BANKS_MAX,
};
struct cdns_pcie_ops {
int (*start_link)(struct cdns_pcie *pcie);
void (*stop_link)(struct cdns_pcie *pcie);
bool (*link_up)(struct cdns_pcie *pcie);
u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr);
};
/**
* struct cdns_plat_pcie_of_data - Register bank offsets for a platform
* @is_rc: controller is a RC
* @ip_reg_bank_offset: ip register bank start offset
* @ip_cfg_ctrl_reg_offset: ip config control register start offset
* @axi_mstr_common_offset: AXI master common register start offset
* @axi_slave_offset: AXI slave start offset
* @axi_master_offset: AXI master start offset
* @axi_hls_offset: AXI HLS register start offset
* @axi_ras_offset: AXI RAS register start offset
* @axi_dti_offset: AXI DTI register start offset
*/
struct cdns_plat_pcie_of_data {
u32 is_rc:1;
u32 ip_reg_bank_offset;
u32 ip_cfg_ctrl_reg_offset;
u32 axi_mstr_common_offset;
u32 axi_slave_offset;
u32 axi_master_offset;
u32 axi_hls_offset;
u32 axi_ras_offset;
u32 axi_dti_offset;
};
/**
* struct cdns_pcie - private data for Cadence PCIe controller drivers
* @reg_base: IO mapped register base
* @mem_res: start/end offsets in the physical system memory to map PCI accesses
* @msg_res: Region used for sending PCIe messages
* @dev: PCIe controller
* @is_rc: tells whether the controller operates in Root Complex or Endpoint mode
* @phy_count: number of supported PHY devices
@@ -263,16 +85,19 @@ struct cdns_pcie_ops {
* @link: list of pointers to corresponding device link representations
* @ops: Platform-specific ops to control various inputs from Cadence PCIe
* wrapper
* @cdns_pcie_reg_offsets: Per-SoC register bank offsets
*/
struct cdns_pcie {
void __iomem *reg_base;
struct resource *mem_res;
struct resource *msg_res;
struct device *dev;
bool is_rc;
int phy_count;
struct phy **phy;
struct device_link **link;
const struct cdns_pcie_ops *ops;
const struct cdns_plat_pcie_of_data *cdns_pcie_reg_offsets;
};
/**
@@ -288,6 +113,8 @@ struct cdns_pcie {
* available
* @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
* @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
* @ecam_supported: Whether ECAM is supported
* @no_inbound_map: Inbound address mapping is not supported
*/
struct cdns_pcie_rc {
struct cdns_pcie pcie;
@@ -298,6 +125,8 @@ struct cdns_pcie_rc {
bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
unsigned int quirk_retrain_flag:1;
unsigned int quirk_detect_quiet_flag:1;
unsigned int ecam_supported:1;
unsigned int no_inbound_map:1;
};
/**
@@ -350,6 +179,43 @@ struct cdns_pcie_ep {
unsigned int quirk_disable_flr:1;
};
static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, enum cdns_pcie_reg_bank bank)
{
u32 offset = 0x0;
switch (bank) {
case REG_BANK_RP:
offset = 0;
break;
case REG_BANK_IP_REG:
offset = pcie->cdns_pcie_reg_offsets->ip_reg_bank_offset;
break;
case REG_BANK_IP_CFG_CTRL_REG:
offset = pcie->cdns_pcie_reg_offsets->ip_cfg_ctrl_reg_offset;
break;
case REG_BANK_AXI_MASTER_COMMON:
offset = pcie->cdns_pcie_reg_offsets->axi_mstr_common_offset;
break;
case REG_BANK_AXI_MASTER:
offset = pcie->cdns_pcie_reg_offsets->axi_master_offset;
break;
case REG_BANK_AXI_SLAVE:
offset = pcie->cdns_pcie_reg_offsets->axi_slave_offset;
break;
case REG_BANK_AXI_HLS:
offset = pcie->cdns_pcie_reg_offsets->axi_hls_offset;
break;
case REG_BANK_AXI_RAS:
offset = pcie->cdns_pcie_reg_offsets->axi_ras_offset;
break;
case REG_BANK_AXI_DTI:
offset = pcie->cdns_pcie_reg_offsets->axi_dti_offset;
break;
default:
break;
}
return offset;
}
/* Register access */
static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
@@ -362,6 +228,27 @@ static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
return readl(pcie->reg_base + reg);
}
static inline void cdns_pcie_hpa_writel(struct cdns_pcie *pcie,
enum cdns_pcie_reg_bank bank,
u32 reg,
u32 value)
{
u32 offset = cdns_reg_bank_to_off(pcie, bank);
reg += offset;
writel(value, pcie->reg_base + reg);
}
static inline u32 cdns_pcie_hpa_readl(struct cdns_pcie *pcie,
enum cdns_pcie_reg_bank bank,
u32 reg)
{
u32 offset = cdns_reg_bank_to_off(pcie, bank);
reg += offset;
return readl(pcie->reg_base + reg);
}
static inline u16 cdns_pcie_readw(struct cdns_pcie *pcie, u32 reg)
{
return readw(pcie->reg_base + reg);
@@ -457,6 +344,29 @@ static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
return cdns_pcie_read_sz(addr, 0x2);
}
static inline void cdns_pcie_hpa_rp_writeb(struct cdns_pcie *pcie,
u32 reg, u8 value)
{
void __iomem *addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
cdns_pcie_write_sz(addr, 0x1, value);
}
static inline void cdns_pcie_hpa_rp_writew(struct cdns_pcie *pcie,
u32 reg, u16 value)
{
void __iomem *addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
cdns_pcie_write_sz(addr, 0x2, value);
}
static inline u16 cdns_pcie_hpa_rp_readw(struct cdns_pcie *pcie, u32 reg)
{
void __iomem *addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
return cdns_pcie_read_sz(addr, 0x2);
}
/* Endpoint Function register access */
static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
u32 reg, u8 value)
@@ -521,6 +431,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
void cdns_pcie_host_disable(struct cdns_pcie_rc *rc);
void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
int where);
int cdns_pcie_hpa_host_setup(struct cdns_pcie_rc *rc);
#else
static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
{
@@ -537,6 +448,11 @@ static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
return 0;
}
static inline int cdns_pcie_hpa_host_setup(struct cdns_pcie_rc *rc)
{
return 0;
}
static inline void cdns_pcie_host_disable(struct cdns_pcie_rc *rc)
{
}
@@ -551,6 +467,7 @@ static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int d
#if IS_ENABLED(CONFIG_PCIE_CADENCE_EP)
int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep);
void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep);
int cdns_pcie_hpa_ep_setup(struct cdns_pcie_ep *ep);
#else
static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
{
@@ -560,10 +477,17 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
static inline void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep)
{
}
static inline int cdns_pcie_hpa_ep_setup(struct cdns_pcie_ep *ep)
{
return 0;
}
#endif
u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap);
u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap);
bool cdns_pcie_linkup(struct cdns_pcie *pcie);
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
@@ -577,8 +501,23 @@ void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
u64 cpu_addr, u64 pci_addr, size_t size);
void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
u8 busnr, u8 fn,
u32 r, u64 cpu_addr);
int cdns_pcie_hpa_host_link_setup(struct cdns_pcie_rc *rc);
void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
int where);
int cdns_pcie_hpa_host_start_link(struct cdns_pcie_rc *rc);
int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie);
void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie);
bool cdns_pcie_hpa_link_up(struct cdns_pcie *pcie);
extern const struct dev_pm_ops cdns_pcie_pm_ops;
#endif /* _PCIE_CADENCE_H */


@@ -74,15 +74,12 @@ static int sg2042_pcie_probe(struct platform_device *pdev)
static void sg2042_pcie_remove(struct platform_device *pdev)
{
struct cdns_pcie *pcie = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
struct cdns_pcie_rc *rc;
rc = container_of(pcie, struct cdns_pcie_rc, pcie);
cdns_pcie_host_disable(rc);
cdns_pcie_disable_phy(pcie);
pm_runtime_disable(dev);
}
static int sg2042_pcie_suspend_noirq(struct device *dev)


@@ -256,6 +256,16 @@ config PCIE_TEGRA194_EP
in order to enable device-specific features PCIE_TEGRA194_EP must be
selected. This uses the DesignWare core.
config PCIE_NXP_S32G
bool "NXP S32G PCIe controller (host mode)"
depends on ARCH_S32 || COMPILE_TEST
select PCIE_DW_HOST
help
Enable support for the PCIe controller in NXP S32G based boards to
work in Host mode. The controller is based on DesignWare IP and
can work either as RC or EP. In order to enable host-specific
features PCIE_NXP_S32G must be selected.
config PCIE_DW_PLAT
bool
@@ -416,6 +426,19 @@ config PCIE_SOPHGO_DW
Say Y here if you want PCIe host controller support on
Sophgo SoCs.
config PCIE_SPACEMIT_K1
tristate "SpacemiT K1 PCIe controller (host mode)"
depends on ARCH_SPACEMIT || COMPILE_TEST
depends on HAS_IOMEM
select PCIE_DW_HOST
select PCI_PWRCTRL_SLOT
default ARCH_SPACEMIT
help
Enables support for the DesignWare based PCIe controller in
the SpacemiT K1 SoC operating in host mode. Three controllers
are available on the K1 SoC; the first of these shares a PHY
with a USB 3.0 host controller (one or the other can be used).
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on ARCH_SPEAR13XX || COMPILE_TEST
@@ -482,15 +505,21 @@ config PCI_DRA7XX_EP
to enable device-specific features PCI_DRA7XX_EP must be selected.
This uses the DesignWare core.
# ARM32 platforms use hook_fault_code() and cannot be built as a loadable module.
config PCI_KEYSTONE
bool
# On non-ARM32 platforms, a loadable module is supported.
config PCI_KEYSTONE_TRISTATE
tristate
config PCI_KEYSTONE_HOST
tristate "TI Keystone PCIe controller (host mode)"
depends on ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
depends on PCI_MSI
select PCIE_DW_HOST
select PCI_KEYSTONE if ARM
select PCI_KEYSTONE_TRISTATE if !ARM
help
Enables support for the PCIe controller in the Keystone SoC to
work in host mode. The PCI controller on Keystone is based on
@@ -498,11 +527,12 @@ config PCI_KEYSTONE_HOST
DesignWare core functions to implement the driver.
config PCI_KEYSTONE_EP
tristate "TI Keystone PCIe controller (endpoint mode)"
depends on ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCI_KEYSTONE if ARM
select PCI_KEYSTONE_TRISTATE if !ARM
help
Enables support for the PCIe controller in the Keystone SoC to
work in endpoint mode. The PCI controller on Keystone is based


@@ -10,8 +10,12 @@ obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCIE_FU740) += pcie-fu740.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_NXP_S32G) += pcie-nxp-s32g.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
# ARM32 platforms use hook_fault_code() and cannot be built as a loadable module.
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
# On non-ARM32 platforms, a loadable module is supported.
obj-$(CONFIG_PCI_KEYSTONE_TRISTATE) += pci-keystone.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o
obj-$(CONFIG_PCIE_QCOM_COMMON) += pcie-qcom-common.o
@@ -31,6 +35,7 @@ obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o
obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o
obj-$(CONFIG_PCIE_SPACEMIT_K1) += pcie-spacemit-k1.o
obj-$(CONFIG_PCIE_STM32_HOST) += pcie-stm32.o
obj-$(CONFIG_PCIE_STM32_EP) += pcie-stm32-ep.o


@@ -17,6 +17,7 @@
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of.h>
#include <linux/of_irq.h>
@@ -777,29 +778,7 @@ err:
return ret;
}
static int ks_pcie_init_id(struct keystone_pcie *ks_pcie)
{
int ret;
unsigned int id;
@@ -831,7 +810,7 @@ static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
return 0;
}
static int ks_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
@@ -861,15 +840,6 @@ static int __init ks_pcie_host_init(struct dw_pcie_rp *pp)
if (ret < 0)
return ret;
return 0;
}
@@ -1134,6 +1104,7 @@ static const struct of_device_id ks_pcie_of_match[] = {
},
{ },
};
MODULE_DEVICE_TABLE(of, ks_pcie_of_match);
static int ks_pcie_probe(struct platform_device *pdev)
{
@@ -1337,6 +1308,8 @@ static int ks_pcie_probe(struct platform_device *pdev)
break;
default:
dev_err(dev, "INVALID device type %d\n", mode);
ret = -EINVAL;
goto err_get_sync;
}
ks_pcie_enable_error_irq(ks_pcie);
@@ -1379,4 +1352,45 @@ static struct platform_driver ks_pcie_driver = {
.of_match_table = ks_pcie_of_match,
},
};
#ifdef CONFIG_ARM
/*
* When a PCI device does not exist during config cycles, keystone host
* gets a bus error instead of returning 0xffffffff (PCI_ERROR_RESPONSE).
* This handler always returns 0 for this kind of fault.
*/
static int ks_pcie_fault(unsigned long addr, unsigned int fsr,
struct pt_regs *regs)
{
unsigned long instr = *(unsigned long *)instruction_pointer(regs);
if ((instr & 0x0e100090) == 0x00100090) {
int reg = (instr >> 12) & 15;
regs->uregs[reg] = -1;
regs->ARM_pc += 4;
}
return 0;
}
static int __init ks_pcie_init(void)
{
/*
* PCIe access errors that result into OCP errors are caught by ARM as
* "External aborts"
*/
if (of_find_matching_node(NULL, ks_pcie_of_match))
hook_fault_code(17, ks_pcie_fault, SIGBUS, 0,
"Asynchronous external abort");
return platform_driver_register(&ks_pcie_driver);
}
device_initcall(ks_pcie_init);
#else
builtin_platform_driver(ks_pcie_driver);
#endif
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("PCIe controller driver for Texas Instruments Keystone SoCs");
MODULE_AUTHOR("Murali Karicheri <m-karicheri2@ti.com>");


@@ -108,10 +108,22 @@ static int meson_pcie_get_mems(struct platform_device *pdev,
struct meson_pcie *mp)
{
struct dw_pcie *pci = &mp->pci;
struct resource *res;
pci->dbi_base = devm_platform_ioremap_resource_byname(pdev, "elbi");
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
/*
* For the broken DTs that supply 'dbi' as 'elbi', parse the 'elbi'
* region and assign it to both 'pci->elbi_base' and 'pci->dbi_base' so
* that the DWC core can skip parsing both regions.
*/
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
if (res) {
pci->elbi_base = devm_pci_remap_cfg_resource(pci->dev, res);
if (IS_ERR(pci->elbi_base))
return PTR_ERR(pci->elbi_base);
pci->dbi_base = pci->elbi_base;
pci->dbi_phys_addr = res->start;
}
mp->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg");
if (IS_ERR(mp->cfg_base))


@@ -797,6 +797,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msix_irq);
/**
* dw_pcie_ep_cleanup - Cleanup DWC EP resources after fundamental reset


@@ -233,6 +233,7 @@ int dw_pcie_allocate_domains(struct dw_pcie_rp *pp)
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_allocate_domains);
void dw_pcie_free_msi(struct dw_pcie_rp *pp)
{
@@ -856,10 +857,19 @@ static void __iomem *dw_pcie_ecam_conf_map_bus(struct pci_bus *bus, unsigned int
return pci->dbi_base + where;
}
static int dw_pcie_op_assert_perst(struct pci_bus *bus, bool assert)
{
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
return dw_pcie_assert_perst(pci, assert);
}
static struct pci_ops dw_pcie_ops = {
.map_bus = dw_pcie_own_conf_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
.assert_perst = dw_pcie_op_assert_perst,
};
static struct pci_ops dw_pcie_ecam_ops = {
@@ -1080,6 +1090,8 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
PCI_COMMAND_MASTER | PCI_COMMAND_SERR;
dw_pcie_writel_dbi(pci, PCI_COMMAND, val);
dw_pcie_hide_unsupported_l1ss(pci);
dw_pcie_config_presets(pp);
/*
* If the platform provides its own child bus config accesses, it means


@@ -168,11 +168,13 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
}
/* ELBI is an optional resource */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
if (res) {
pci->elbi_base = devm_ioremap_resource(pci->dev, res);
if (IS_ERR(pci->elbi_base))
return PTR_ERR(pci->elbi_base);
if (!pci->elbi_base) {
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
if (res) {
pci->elbi_base = devm_ioremap_resource(pci->dev, res);
if (IS_ERR(pci->elbi_base))
return PTR_ERR(pci->elbi_base);
}
}
/* LLDD is supposed to manually switch the clocks and resets state */
@@ -1081,6 +1083,30 @@ void dw_pcie_edma_remove(struct dw_pcie *pci)
dw_edma_remove(&pci->edma);
}
void dw_pcie_hide_unsupported_l1ss(struct dw_pcie *pci)
{
u16 l1ss;
u32 l1ss_cap;
if (pci->l1ss_support)
return;
l1ss = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
if (!l1ss)
return;
/*
* Unless the driver claims "l1ss_support", don't advertise L1 PM
* Substates because they require CLKREQ# and possibly other
* device-specific configuration.
*/
l1ss_cap = dw_pcie_readl_dbi(pci, l1ss + PCI_L1SS_CAP);
l1ss_cap &= ~(PCI_L1SS_CAP_PCIPM_L1_1 | PCI_L1SS_CAP_ASPM_L1_1 |
PCI_L1SS_CAP_PCIPM_L1_2 | PCI_L1SS_CAP_ASPM_L1_2 |
PCI_L1SS_CAP_L1_PM_SS);
dw_pcie_writel_dbi(pci, l1ss + PCI_L1SS_CAP, l1ss_cap);
}
void dw_pcie_setup(struct dw_pcie *pci)
{
u32 val;


@@ -97,7 +97,7 @@
#define PORT_LANE_SKEW_INSERT_MASK GENMASK(23, 0)
#define PCIE_PORT_DEBUG0 0x728
#define PORT_LOGIC_LTSSM_STATE_MASK 0x1f
#define PORT_LOGIC_LTSSM_STATE_MASK 0x3f
#define PORT_LOGIC_LTSSM_STATE_L0 0x11
#define PCIE_PORT_DEBUG1 0x72C
#define PCIE_PORT_DEBUG1_LINK_UP BIT(4)
@@ -121,6 +121,7 @@
#define GEN3_RELATED_OFF 0x890
#define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL BIT(0)
#define GEN3_RELATED_OFF_EQ_PHASE_2_3 BIT(9)
#define GEN3_RELATED_OFF_RXEQ_RGRDLESS_RXTS BIT(13)
#define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16)
#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24
@@ -138,6 +139,13 @@
#define GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA GENMASK(13, 10)
#define GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA GENMASK(17, 14)
#define COHERENCY_CONTROL_1_OFF 0x8E0
#define CFG_MEMTYPE_BOUNDARY_LOW_ADDR_MASK GENMASK(31, 2)
#define CFG_MEMTYPE_VALUE BIT(0)
#define COHERENCY_CONTROL_2_OFF 0x8E4
#define COHERENCY_CONTROL_3_OFF 0x8E8
#define PCIE_PORT_MULTI_LANE_CTRL 0x8C0
#define PORT_MLTI_UPCFG_SUPPORT BIT(7)
@@ -485,6 +493,7 @@ struct dw_pcie_ops {
enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie);
int (*start_link)(struct dw_pcie *pcie);
void (*stop_link)(struct dw_pcie *pcie);
int (*assert_perst)(struct dw_pcie *pcie, bool assert);
};
struct debugfs_info {
@@ -516,6 +525,7 @@ struct dw_pcie {
int max_link_speed;
u8 n_fts[2];
struct dw_edma_chip edma;
bool l1ss_support; /* L1 PM Substates support */
struct clk_bulk_data app_clks[DW_PCIE_NUM_APP_CLKS];
struct clk_bulk_data core_clks[DW_PCIE_NUM_CORE_CLKS];
struct reset_control_bulk_data app_rsts[DW_PCIE_NUM_APP_RSTS];
@@ -573,6 +583,7 @@ int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 parent_bus_addr,
u8 bar, size_t size);
void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
void dw_pcie_hide_unsupported_l1ss(struct dw_pcie *pci);
void dw_pcie_setup(struct dw_pcie *pci);
void dw_pcie_iatu_detect(struct dw_pcie *pci);
int dw_pcie_edma_detect(struct dw_pcie *pci);
@@ -787,6 +798,14 @@ static inline void dw_pcie_stop_link(struct dw_pcie *pci)
pci->ops->stop_link(pci);
}
static inline int dw_pcie_assert_perst(struct dw_pcie *pci, bool assert)
{
if (pci->ops && pci->ops->assert_perst)
return pci->ops->assert_perst(pci, assert);
return 0;
}
static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
{
u32 val;


@@ -62,6 +62,12 @@
/* Interrupt Mask Register Related to Miscellaneous Operation */
#define PCIE_CLIENT_INTR_MASK_MISC 0x24
/* Power Management Control Register */
#define PCIE_CLIENT_POWER_CON 0x2c
#define PCIE_CLKREQ_READY FIELD_PREP_WM16(BIT(0), 1)
#define PCIE_CLKREQ_NOT_READY FIELD_PREP_WM16(BIT(0), 0)
#define PCIE_CLKREQ_PULL_DOWN FIELD_PREP_WM16(GENMASK(13, 12), 1)
/* Hot Reset Control Register */
#define PCIE_CLIENT_HOT_RESET_CTRL 0x180
#define PCIE_LTSSM_APP_DLY2_EN BIT(1)
@@ -82,9 +88,9 @@ struct rockchip_pcie {
unsigned int clk_cnt;
struct reset_control *rst;
struct gpio_desc *rst_gpio;
struct regulator *vpcie3v3;
struct irq_domain *irq_domain;
const struct rockchip_pcie_of_data *data;
bool supports_clkreq;
};
struct rockchip_pcie_of_data {
@@ -200,6 +206,35 @@ static bool rockchip_pcie_link_up(struct dw_pcie *pci)
return FIELD_GET(PCIE_LINKUP_MASK, val) == PCIE_LINKUP;
}
/*
* See e.g. section '11.6.6.4 L1 Substate' in the RK3588 TRM V1.0 for the steps
* needed to support L1 substates. Currently, just enable L1 substates for RC
* mode if CLKREQ# is properly connected and supports-clkreq is present in DT.
* For EP mode, more work needs to be done to actually save power in
* L1 substates, so disable L1 substates until there is proper support.
*/
static void rockchip_pcie_configure_l1ss(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
/* Enable L1 substates if CLKREQ# is properly connected */
if (rockchip->supports_clkreq) {
rockchip_pcie_writel_apb(rockchip, PCIE_CLKREQ_READY,
PCIE_CLIENT_POWER_CON);
pci->l1ss_support = true;
return;
}
/*
* Otherwise, assert CLKREQ# unconditionally. Since
* pci->l1ss_support is not set, the DWC core will prevent L1
* Substates support from being advertised.
*/
rockchip_pcie_writel_apb(rockchip,
PCIE_CLKREQ_PULL_DOWN | PCIE_CLKREQ_NOT_READY,
PCIE_CLIENT_POWER_CON);
}
static void rockchip_pcie_enable_l0s(struct dw_pcie *pci)
{
u32 cap, lnkcap;
@@ -264,6 +299,7 @@ static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
irq_set_chained_handler_and_data(irq, rockchip_pcie_intx_handler,
rockchip);
rockchip_pcie_configure_l1ss(pci);
rockchip_pcie_enable_l0s(pci);
return 0;
@@ -412,6 +448,9 @@ static int rockchip_pcie_resource_get(struct platform_device *pdev,
return dev_err_probe(&pdev->dev, PTR_ERR(rockchip->rst),
"failed to get reset lines\n");
rockchip->supports_clkreq = of_property_read_bool(pdev->dev.of_node,
"supports-clkreq");
return 0;
}
@@ -652,22 +691,15 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
return ret;
/* DON'T MOVE ME: must be enabled before PHY init */
rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
if (IS_ERR(rockchip->vpcie3v3)) {
if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV)
return dev_err_probe(dev, PTR_ERR(rockchip->vpcie3v3),
"failed to get vpcie3v3 regulator\n");
rockchip->vpcie3v3 = NULL;
} else {
ret = regulator_enable(rockchip->vpcie3v3);
if (ret)
return dev_err_probe(dev, ret,
"failed to enable vpcie3v3 regulator\n");
}
ret = devm_regulator_get_enable_optional(dev, "vpcie3v3");
if (ret < 0 && ret != -ENODEV)
return dev_err_probe(dev, ret,
"failed to enable vpcie3v3 regulator\n");
ret = rockchip_pcie_phy_init(rockchip);
if (ret)
goto disable_regulator;
return dev_err_probe(dev, ret,
"failed to initialize the phy\n");
ret = reset_control_deassert(rockchip->rst);
if (ret)
@@ -700,9 +732,6 @@ deinit_clk:
clk_bulk_disable_unprepare(rockchip->clk_cnt, rockchip->clks);
deinit_phy:
rockchip_pcie_phy_deinit(rockchip);
disable_regulator:
if (rockchip->vpcie3v3)
regulator_disable(rockchip->vpcie3v3);
return ret;
}


@@ -0,0 +1,406 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for NXP S32G SoCs
*
* Copyright 2019-2025 NXP
*/
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_address.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/sizes.h>
#include <linux/types.h>
#include "pcie-designware.h"
/* PCIe controller Sub-System */
/* PCIe controller 0 General Control 1 */
#define PCIE_S32G_PE0_GEN_CTRL_1 0x50
#define DEVICE_TYPE_MASK GENMASK(3, 0)
#define SRIS_MODE BIT(8)
/* PCIe controller 0 General Control 3 */
#define PCIE_S32G_PE0_GEN_CTRL_3 0x58
#define LTSSM_EN BIT(0)
/* PCIe Controller 0 Interrupt Status */
#define PCIE_S32G_PE0_INT_STS 0xE8
#define HP_INT_STS BIT(6)
/* Boundary between peripheral space and physical memory space */
#define S32G_MEMORY_BOUNDARY_ADDR 0x80000000
struct s32g_pcie_port {
struct list_head list;
struct phy *phy;
};
struct s32g_pcie {
struct dw_pcie pci;
void __iomem *ctrl_base;
struct list_head ports;
};
#define to_s32g_from_dw_pcie(x) \
container_of(x, struct s32g_pcie, pci)
static void s32g_pcie_writel_ctrl(struct s32g_pcie *s32g_pp, u32 reg, u32 val)
{
writel(val, s32g_pp->ctrl_base + reg);
}
static u32 s32g_pcie_readl_ctrl(struct s32g_pcie *s32g_pp, u32 reg)
{
return readl(s32g_pp->ctrl_base + reg);
}
static void s32g_pcie_enable_ltssm(struct s32g_pcie *s32g_pp)
{
u32 reg;
reg = s32g_pcie_readl_ctrl(s32g_pp, PCIE_S32G_PE0_GEN_CTRL_3);
reg |= LTSSM_EN;
s32g_pcie_writel_ctrl(s32g_pp, PCIE_S32G_PE0_GEN_CTRL_3, reg);
}
static void s32g_pcie_disable_ltssm(struct s32g_pcie *s32g_pp)
{
u32 reg;
reg = s32g_pcie_readl_ctrl(s32g_pp, PCIE_S32G_PE0_GEN_CTRL_3);
reg &= ~LTSSM_EN;
s32g_pcie_writel_ctrl(s32g_pp, PCIE_S32G_PE0_GEN_CTRL_3, reg);
}
static int s32g_pcie_start_link(struct dw_pcie *pci)
{
struct s32g_pcie *s32g_pp = to_s32g_from_dw_pcie(pci);
s32g_pcie_enable_ltssm(s32g_pp);
return 0;
}
static void s32g_pcie_stop_link(struct dw_pcie *pci)
{
struct s32g_pcie *s32g_pp = to_s32g_from_dw_pcie(pci);
s32g_pcie_disable_ltssm(s32g_pp);
}
static struct dw_pcie_ops s32g_pcie_ops = {
.start_link = s32g_pcie_start_link,
.stop_link = s32g_pcie_stop_link,
};
/* Configure the AMBA AXI Coherency Extensions (ACE) interface */
static void s32g_pcie_reset_mstr_ace(struct dw_pcie *pci)
{
u32 ddr_base_low = lower_32_bits(S32G_MEMORY_BOUNDARY_ADDR);
u32 ddr_base_high = upper_32_bits(S32G_MEMORY_BOUNDARY_ADDR);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, COHERENCY_CONTROL_3_OFF, 0x0);
/*
* Ncore is a cache-coherent interconnect module that enables the
* integration of heterogeneous coherent and non-coherent agents in
* the chip. Ncore transactions to peripherals should be non-coherent,
* or Ncore might drop them.
*
* One example where this is needed is PCIe MSIs, which use NoSnoop=0
* and might end up routed to Ncore. PCIe coherent traffic (e.g. MSIs)
* that targets peripheral space will be dropped by Ncore because
* peripherals on S32G are not coherent as slaves. We add a hard
* boundary in the PCIe controller coherency control registers to
* separate physical memory space from peripheral space.
*
* Define the start of DDR as seen by Linux as this boundary between
* "memory" and "peripherals", with peripherals being below.
*/
dw_pcie_writel_dbi(pci, COHERENCY_CONTROL_1_OFF,
(ddr_base_low & CFG_MEMTYPE_BOUNDARY_LOW_ADDR_MASK));
dw_pcie_writel_dbi(pci, COHERENCY_CONTROL_2_OFF, ddr_base_high);
dw_pcie_dbi_ro_wr_dis(pci);
}
static int s32g_init_pcie_controller(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct s32g_pcie *s32g_pp = to_s32g_from_dw_pcie(pci);
u32 val;
/* Set RP mode */
val = s32g_pcie_readl_ctrl(s32g_pp, PCIE_S32G_PE0_GEN_CTRL_1);
val &= ~DEVICE_TYPE_MASK;
val |= FIELD_PREP(DEVICE_TYPE_MASK, PCI_EXP_TYPE_ROOT_PORT);
/* Use default CRNS */
val &= ~SRIS_MODE;
s32g_pcie_writel_ctrl(s32g_pp, PCIE_S32G_PE0_GEN_CTRL_1, val);
/*
* Make sure we use the coherency defaults (just in case the settings
* have been changed from their reset values)
*/
s32g_pcie_reset_mstr_ace(pci);
dw_pcie_dbi_ro_wr_en(pci);
val = dw_pcie_readl_dbi(pci, PCIE_PORT_FORCE);
val |= PORT_FORCE_DO_DESKEW_FOR_SRIS;
dw_pcie_writel_dbi(pci, PCIE_PORT_FORCE, val);
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val |= GEN3_RELATED_OFF_EQ_PHASE_2_3;
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
static const struct dw_pcie_host_ops s32g_pcie_host_ops = {
.init = s32g_init_pcie_controller,
};
static int s32g_init_pcie_phy(struct s32g_pcie *s32g_pp)
{
struct dw_pcie *pci = &s32g_pp->pci;
struct device *dev = pci->dev;
struct s32g_pcie_port *port, *tmp;
int ret;
list_for_each_entry(port, &s32g_pp->ports, list) {
ret = phy_init(port->phy);
if (ret) {
dev_err(dev, "Failed to init serdes PHY\n");
goto err_phy_revert;
}
ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, 0);
if (ret) {
dev_err(dev, "Failed to set mode on serdes PHY\n");
goto err_phy_exit;
}
ret = phy_power_on(port->phy);
if (ret) {
dev_err(dev, "Failed to power on serdes PHY\n");
goto err_phy_exit;
}
}
return 0;
err_phy_exit:
phy_exit(port->phy);
err_phy_revert:
list_for_each_entry_continue_reverse(port, &s32g_pp->ports, list) {
phy_power_off(port->phy);
phy_exit(port->phy);
}
list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list)
list_del(&port->list);
return ret;
}
static void s32g_deinit_pcie_phy(struct s32g_pcie *s32g_pp)
{
struct s32g_pcie_port *port, *tmp;
list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list) {
phy_power_off(port->phy);
phy_exit(port->phy);
list_del(&port->list);
}
}
static int s32g_pcie_init(struct device *dev, struct s32g_pcie *s32g_pp)
{
s32g_pcie_disable_ltssm(s32g_pp);
return s32g_init_pcie_phy(s32g_pp);
}
static void s32g_pcie_deinit(struct s32g_pcie *s32g_pp)
{
s32g_pcie_disable_ltssm(s32g_pp);
s32g_deinit_pcie_phy(s32g_pp);
}
static int s32g_pcie_parse_port(struct s32g_pcie *s32g_pp, struct device_node *node)
{
struct device *dev = s32g_pp->pci.dev;
struct s32g_pcie_port *port;
int num_lanes;
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
if (!port)
return -ENOMEM;
port->phy = devm_of_phy_get(dev, node, NULL);
if (IS_ERR(port->phy))
return dev_err_probe(dev, PTR_ERR(port->phy),
"Failed to get serdes PHY\n");
INIT_LIST_HEAD(&port->list);
list_add_tail(&port->list, &s32g_pp->ports);
/*
* The DWC core initialization code cannot yet parse the num-lanes
* attribute in the Root Port node. The S32G only supports one Root
* Port for now so its driver can parse the node and set the num_lanes
* field of struct dw_pcie before calling dw_pcie_host_init().
*/
if (!of_property_read_u32(node, "num-lanes", &num_lanes))
s32g_pp->pci.num_lanes = num_lanes;
return 0;
}
static int s32g_pcie_parse_ports(struct device *dev, struct s32g_pcie *s32g_pp)
{
struct s32g_pcie_port *port, *tmp;
int ret = -ENOENT;
for_each_available_child_of_node_scoped(dev->of_node, of_port) {
if (!of_node_is_type(of_port, "pci"))
continue;
ret = s32g_pcie_parse_port(s32g_pp, of_port);
if (ret)
goto err_port;
}
return ret;
err_port:
list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list)
list_del(&port->list);
return ret;
}
static int s32g_pcie_get_resources(struct platform_device *pdev,
struct s32g_pcie *s32g_pp)
{
struct dw_pcie *pci = &s32g_pp->pci;
struct device *dev = &pdev->dev;
int ret;
pci->dev = dev;
pci->ops = &s32g_pcie_ops;
s32g_pp->ctrl_base = devm_platform_ioremap_resource_byname(pdev, "ctrl");
if (IS_ERR(s32g_pp->ctrl_base))
return PTR_ERR(s32g_pp->ctrl_base);
INIT_LIST_HEAD(&s32g_pp->ports);
ret = s32g_pcie_parse_ports(dev, s32g_pp);
if (ret)
return dev_err_probe(dev, ret,
"Failed to parse Root Port: %d\n", ret);
platform_set_drvdata(pdev, s32g_pp);
return 0;
}
static int s32g_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct s32g_pcie *s32g_pp;
struct dw_pcie_rp *pp;
int ret;
s32g_pp = devm_kzalloc(dev, sizeof(*s32g_pp), GFP_KERNEL);
if (!s32g_pp)
return -ENOMEM;
ret = s32g_pcie_get_resources(pdev, s32g_pp);
if (ret)
return ret;
pm_runtime_no_callbacks(dev);
devm_pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0)
goto err_pm_runtime_put;
ret = s32g_pcie_init(dev, s32g_pp);
if (ret)
goto err_pm_runtime_put;
pp = &s32g_pp->pci.pp;
pp->ops = &s32g_pcie_host_ops;
pp->use_atu_msg = true;
ret = dw_pcie_host_init(pp);
if (ret)
goto err_pcie_deinit;
return 0;
err_pcie_deinit:
s32g_pcie_deinit(s32g_pp);
err_pm_runtime_put:
pm_runtime_put(dev);
return ret;
}
static int s32g_pcie_suspend_noirq(struct device *dev)
{
struct s32g_pcie *s32g_pp = dev_get_drvdata(dev);
struct dw_pcie *pci = &s32g_pp->pci;
return dw_pcie_suspend_noirq(pci);
}
static int s32g_pcie_resume_noirq(struct device *dev)
{
struct s32g_pcie *s32g_pp = dev_get_drvdata(dev);
struct dw_pcie *pci = &s32g_pp->pci;
return dw_pcie_resume_noirq(pci);
}
static const struct dev_pm_ops s32g_pcie_pm_ops = {
NOIRQ_SYSTEM_SLEEP_PM_OPS(s32g_pcie_suspend_noirq,
s32g_pcie_resume_noirq)
};
static const struct of_device_id s32g_pcie_of_match[] = {
{ .compatible = "nxp,s32g2-pcie" },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, s32g_pcie_of_match);
static struct platform_driver s32g_pcie_driver = {
.driver = {
.name = "s32g-pcie",
.of_match_table = s32g_pcie_of_match,
.suppress_bind_attrs = true,
.pm = pm_sleep_ptr(&s32g_pcie_pm_ops),
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = s32g_pcie_probe,
};
builtin_platform_driver(s32g_pcie_driver);
MODULE_AUTHOR("Ionut Vicovan <Ionut.Vicovan@nxp.com>");
MODULE_DESCRIPTION("NXP S32G PCIe Host controller driver");
MODULE_LICENSE("GPL");


@@ -641,6 +641,18 @@ static int qcom_pcie_post_init_1_0_0(struct qcom_pcie *pcie)
return 0;
}
static int qcom_pcie_assert_perst(struct dw_pcie *pci, bool assert)
{
struct qcom_pcie *pcie = to_qcom_pcie(pci);
if (assert)
qcom_ep_reset_assert(pcie);
else
qcom_ep_reset_deassert(pcie);
return 0;
}
static void qcom_pcie_2_3_2_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;
@@ -1012,6 +1024,8 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
val &= ~REQ_NOT_ENTR_L1;
writel(val, pcie->parf + PARF_PM_CTRL);
pci->l1ss_support = true;
val = readl(pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2);
val |= EN;
writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2);
@@ -1480,6 +1494,7 @@ static const struct qcom_pcie_cfg cfg_fw_managed = {
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = qcom_pcie_link_up,
.start_link = qcom_pcie_start_link,
.assert_perst = qcom_pcie_assert_perst,
};
static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
@@ -1529,6 +1544,7 @@ static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
{
u32 offset, status, width, speed;
struct dw_pcie *pci = pcie->pci;
struct dev_pm_opp_key key = {};
unsigned long freq_kbps;
struct dev_pm_opp *opp;
int ret, freq_mbps;
@@ -1556,8 +1572,20 @@ static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
return;
freq_kbps = freq_mbps * KILO;
opp = dev_pm_opp_find_freq_exact(pci->dev, freq_kbps * width,
true);
opp = dev_pm_opp_find_level_exact(pci->dev, speed);
if (IS_ERR(opp)) {
/* opp-level is not defined, use only the frequency */
opp = dev_pm_opp_find_freq_exact(pci->dev, freq_kbps * width,
true);
} else {
/* put opp-level OPP */
dev_pm_opp_put(opp);
key.freq = freq_kbps * width;
key.level = speed;
key.bw = 0;
opp = dev_pm_opp_find_key_exact(pci->dev, &key, true);
}
if (!IS_ERR(opp)) {
ret = dev_pm_opp_set_opp(pci->dev, opp);
if (ret)


@@ -0,0 +1,357 @@
// SPDX-License-Identifier: GPL-2.0
/*
* SpacemiT K1 PCIe host driver
*
* Copyright (C) 2025 by RISCstar Solutions Corporation. All rights reserved.
* Copyright (c) 2023, spacemit Corporation.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/mfd/syscon.h>
#include <linux/mod_devicetable.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include <linux/types.h>
#include "pcie-designware.h"
#define PCI_VENDOR_ID_SPACEMIT 0x201f
#define PCI_DEVICE_ID_SPACEMIT_K1 0x0001
/* Offsets and field definitions for link management registers */
#define K1_PHY_AHB_IRQ_EN 0x0000
#define PCIE_INTERRUPT_EN BIT(0)
#define K1_PHY_AHB_LINK_STS 0x0004
#define SMLH_LINK_UP BIT(1)
#define RDLH_LINK_UP BIT(12)
#define INTR_ENABLE 0x0014
#define MSI_CTRL_INT BIT(11)
/* Some controls require APMU regmap access */
#define SYSCON_APMU "spacemit,apmu"
/* Offsets and field definitions for APMU registers */
#define PCIE_CLK_RESET_CONTROL 0x0000
#define LTSSM_EN BIT(6)
#define PCIE_AUX_PWR_DET BIT(9)
#define PCIE_RC_PERST BIT(12) /* 1: assert PERST# */
#define APP_HOLD_PHY_RST BIT(30)
#define DEVICE_TYPE_RC BIT(31) /* 0: endpoint; 1: RC */
#define PCIE_CONTROL_LOGIC 0x0004
#define PCIE_SOFT_RESET BIT(0)
struct k1_pcie {
struct dw_pcie pci;
struct phy *phy;
void __iomem *link;
struct regmap *pmu; /* Errors ignored; MMIO-backed regmap */
u32 pmu_off;
};
#define to_k1_pcie(dw_pcie) \
platform_get_drvdata(to_platform_device((dw_pcie)->dev))
static void k1_pcie_toggle_soft_reset(struct k1_pcie *k1)
{
u32 offset;
u32 val;
/*
* Write, then read back to guarantee it has reached the device
* before we start the delay.
*/
offset = k1->pmu_off + PCIE_CONTROL_LOGIC;
regmap_set_bits(k1->pmu, offset, PCIE_SOFT_RESET);
regmap_read(k1->pmu, offset, &val);
mdelay(2);
regmap_clear_bits(k1->pmu, offset, PCIE_SOFT_RESET);
}
/* Enable app clocks, deassert resets */
static int k1_pcie_enable_resources(struct k1_pcie *k1)
{
struct dw_pcie *pci = &k1->pci;
int ret;
ret = clk_bulk_prepare_enable(ARRAY_SIZE(pci->app_clks), pci->app_clks);
if (ret)
return ret;
ret = reset_control_bulk_deassert(ARRAY_SIZE(pci->app_rsts),
pci->app_rsts);
if (ret)
goto err_disable_clks;
return 0;
err_disable_clks:
clk_bulk_disable_unprepare(ARRAY_SIZE(pci->app_clks), pci->app_clks);
return ret;
}
/* Assert resets, disable app clocks */
static void k1_pcie_disable_resources(struct k1_pcie *k1)
{
struct dw_pcie *pci = &k1->pci;
reset_control_bulk_assert(ARRAY_SIZE(pci->app_rsts), pci->app_rsts);
clk_bulk_disable_unprepare(ARRAY_SIZE(pci->app_clks), pci->app_clks);
}
/* FIXME: Disable ASPM L1 to avoid errors reported on some NVMe drives */
static void k1_pcie_disable_aspm_l1(struct k1_pcie *k1)
{
struct dw_pcie *pci = &k1->pci;
u8 offset;
u32 val;
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
offset += PCI_EXP_LNKCAP;
dw_pcie_dbi_ro_wr_en(pci);
val = dw_pcie_readl_dbi(pci, offset);
val &= ~PCI_EXP_LNKCAP_ASPM_L1;
dw_pcie_writel_dbi(pci, offset, val);
dw_pcie_dbi_ro_wr_dis(pci);
}
static int k1_pcie_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct k1_pcie *k1 = to_k1_pcie(pci);
u32 reset_ctrl;
u32 val;
int ret;
k1_pcie_toggle_soft_reset(k1);
ret = k1_pcie_enable_resources(k1);
if (ret)
return ret;
/* Set the PCI vendor and device ID */
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, PCI_VENDOR_ID_SPACEMIT);
dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, PCI_DEVICE_ID_SPACEMIT_K1);
dw_pcie_dbi_ro_wr_dis(pci);
/*
* Start by asserting fundamental reset (drive PERST# low). The
* PCI CEM spec says that PERST# should be deasserted at least
* 100ms after the power becomes stable, so we'll insert that
* delay first. Write, then read it back to guarantee the write
* reaches the device before we start the delay.
*/
reset_ctrl = k1->pmu_off + PCIE_CLK_RESET_CONTROL;
regmap_set_bits(k1->pmu, reset_ctrl, PCIE_RC_PERST);
regmap_read(k1->pmu, reset_ctrl, &val);
mdelay(PCIE_T_PVPERL_MS);
/*
* Put the controller in root complex mode, and indicate that
* Vaux (3.3v) is present.
*/
regmap_set_bits(k1->pmu, reset_ctrl, DEVICE_TYPE_RC | PCIE_AUX_PWR_DET);
ret = phy_init(k1->phy);
if (ret) {
k1_pcie_disable_resources(k1);
return ret;
}
/* Deassert fundamental reset (drive PERST# high) */
regmap_clear_bits(k1->pmu, reset_ctrl, PCIE_RC_PERST);
/* Finally, as a workaround, disable ASPM L1 */
k1_pcie_disable_aspm_l1(k1);
return 0;
}
static void k1_pcie_deinit(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct k1_pcie *k1 = to_k1_pcie(pci);
/* Assert fundamental reset (drive PERST# low) */
regmap_set_bits(k1->pmu, k1->pmu_off + PCIE_CLK_RESET_CONTROL,
PCIE_RC_PERST);
phy_exit(k1->phy);
k1_pcie_disable_resources(k1);
}
static const struct dw_pcie_host_ops k1_pcie_host_ops = {
.init = k1_pcie_init,
.deinit = k1_pcie_deinit,
};
static bool k1_pcie_link_up(struct dw_pcie *pci)
{
struct k1_pcie *k1 = to_k1_pcie(pci);
u32 val;
val = readl_relaxed(k1->link + K1_PHY_AHB_LINK_STS);
return (val & RDLH_LINK_UP) && (val & SMLH_LINK_UP);
}
static int k1_pcie_start_link(struct dw_pcie *pci)
{
struct k1_pcie *k1 = to_k1_pcie(pci);
u32 val;
/* Stop holding the PHY in reset, and enable link training */
regmap_update_bits(k1->pmu, k1->pmu_off + PCIE_CLK_RESET_CONTROL,
APP_HOLD_PHY_RST | LTSSM_EN, LTSSM_EN);
/* Enable the MSI interrupt */
writel_relaxed(MSI_CTRL_INT, k1->link + INTR_ENABLE);
/* Top-level interrupt enable */
val = readl_relaxed(k1->link + K1_PHY_AHB_IRQ_EN);
val |= PCIE_INTERRUPT_EN;
writel_relaxed(val, k1->link + K1_PHY_AHB_IRQ_EN);
return 0;
}
static void k1_pcie_stop_link(struct dw_pcie *pci)
{
struct k1_pcie *k1 = to_k1_pcie(pci);
u32 val;
/* Disable interrupts */
val = readl_relaxed(k1->link + K1_PHY_AHB_IRQ_EN);
val &= ~PCIE_INTERRUPT_EN;
writel_relaxed(val, k1->link + K1_PHY_AHB_IRQ_EN);
writel_relaxed(0, k1->link + INTR_ENABLE);
/* Disable the link and hold the PHY in reset */
regmap_update_bits(k1->pmu, k1->pmu_off + PCIE_CLK_RESET_CONTROL,
APP_HOLD_PHY_RST | LTSSM_EN, APP_HOLD_PHY_RST);
}
static const struct dw_pcie_ops k1_pcie_ops = {
.link_up = k1_pcie_link_up,
.start_link = k1_pcie_start_link,
.stop_link = k1_pcie_stop_link,
};
static int k1_pcie_parse_port(struct k1_pcie *k1)
{
struct device *dev = k1->pci.dev;
struct device_node *root_port;
struct phy *phy;
/* We assume only one root port */
root_port = of_get_next_available_child(dev_of_node(dev), NULL);
if (!root_port)
return -EINVAL;
phy = devm_of_phy_get(dev, root_port, NULL);
of_node_put(root_port);
if (IS_ERR(phy))
return PTR_ERR(phy);
k1->phy = phy;
return 0;
}
static int k1_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct k1_pcie *k1;
int ret;
k1 = devm_kzalloc(dev, sizeof(*k1), GFP_KERNEL);
if (!k1)
return -ENOMEM;
k1->pmu = syscon_regmap_lookup_by_phandle_args(dev_of_node(dev),
SYSCON_APMU, 1,
&k1->pmu_off);
if (IS_ERR(k1->pmu))
return dev_err_probe(dev, PTR_ERR(k1->pmu),
"failed to lookup PMU registers\n");
k1->link = devm_platform_ioremap_resource_byname(pdev, "link");
if (IS_ERR(k1->link))
return dev_err_probe(dev, PTR_ERR(k1->link),
"failed to map \"link\" registers\n");
k1->pci.dev = dev;
k1->pci.ops = &k1_pcie_ops;
k1->pci.pp.num_vectors = MAX_MSI_IRQS;
dw_pcie_cap_set(&k1->pci, REQ_RES);
k1->pci.pp.ops = &k1_pcie_host_ops;
/* Hold the PHY in reset until we start the link */
regmap_set_bits(k1->pmu, k1->pmu_off + PCIE_CLK_RESET_CONTROL,
APP_HOLD_PHY_RST);
ret = devm_regulator_get_enable(dev, "vpcie3v3");
if (ret)
return dev_err_probe(dev, ret,
"failed to get \"vpcie3v3\" supply\n");
pm_runtime_set_active(dev);
pm_runtime_no_callbacks(dev);
devm_pm_runtime_enable(dev);
platform_set_drvdata(pdev, k1);
ret = k1_pcie_parse_port(k1);
if (ret)
return dev_err_probe(dev, ret, "failed to parse root port\n");
ret = dw_pcie_host_init(&k1->pci.pp);
if (ret)
return dev_err_probe(dev, ret, "failed to initialize host\n");
return 0;
}
static void k1_pcie_remove(struct platform_device *pdev)
{
struct k1_pcie *k1 = platform_get_drvdata(pdev);
dw_pcie_host_deinit(&k1->pci.pp);
}
static const struct of_device_id k1_pcie_of_match_table[] = {
{ .compatible = "spacemit,k1-pcie", },
{ }
};
static struct platform_driver k1_pcie_driver = {
.probe = k1_pcie_probe,
.remove = k1_pcie_remove,
.driver = {
.name = "spacemit-k1-pcie",
.of_match_table = k1_pcie_of_match_table,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
};
module_platform_driver(k1_pcie_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SpacemiT K1 PCIe host driver");


@@ -7,9 +7,9 @@
*/
#include <linux/clk.h>
#include <linux/gpio/consumer.h>
#include <linux/mfd/syscon.h>
#include <linux/of_platform.h>
#include <linux/of_gpio.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
@@ -37,36 +37,9 @@ static void stm32_pcie_ep_init(struct dw_pcie_ep *ep)
dw_pcie_ep_reset_bar(pci, bar);
}
static int stm32_pcie_enable_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_LTSSM_EN,
STM32MP25_PCIECR_LTSSM_EN);
return dw_pcie_wait_for_link(pci);
}
static void stm32_pcie_disable_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, STM32MP25_PCIECR_LTSSM_EN, 0);
}
static int stm32_pcie_start_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
int ret;
dev_dbg(pci->dev, "Enable link\n");
ret = stm32_pcie_enable_link(pci);
if (ret) {
dev_err(pci->dev, "PCIe cannot establish link: %d\n", ret);
return ret;
}
enable_irq(stm32_pcie->perst_irq);
@@ -77,11 +50,7 @@ static void stm32_pcie_stop_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
dev_dbg(pci->dev, "Disable link\n");
disable_irq(stm32_pcie->perst_irq);
stm32_pcie_disable_link(pci);
}
static int stm32_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
@@ -152,6 +121,9 @@ static void stm32_pcie_perst_assert(struct dw_pcie *pci)
dev_dbg(dev, "PERST asserted by host\n");
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_LTSSM_EN, 0);
pci_epc_deinit_notify(ep->epc);
stm32_pcie_disable_resources(stm32_pcie);
@@ -192,6 +164,11 @@ static void stm32_pcie_perst_deassert(struct dw_pcie *pci)
pci_epc_init_notify(ep->epc);
/* Enable link training */
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_LTSSM_EN,
STM32MP25_PCIECR_LTSSM_EN);
return;
err_disable_resources:
@@ -237,6 +214,8 @@ static int stm32_add_pcie_ep(struct stm32_pcie *stm32_pcie,
ep->ops = &stm32_pcie_ep_ops;
ep->page_size = stm32_pcie_epc_features.align;
ret = dw_pcie_ep_init(ep);
if (ret) {
dev_err(dev, "Failed to initialize ep: %d\n", ret);

View File

@@ -7,18 +7,30 @@
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/irq.h>
#include <linux/mfd/syscon.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/phy/phy.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/pm_wakeirq.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include <linux/stddef.h>
#include "../../pci.h"
#include "pcie-designware.h"
#include "pcie-stm32.h"
struct stm32_pcie {
struct dw_pcie pci;


@@ -6,6 +6,9 @@
* Author: Christian Bruel <christian.bruel@foss.st.com>
*/
#include <linux/bits.h>
#include <linux/device.h>
#define to_stm32_pcie(x) dev_get_drvdata((x)->dev)
#define STM32MP25_PCIECR_TYPE_MASK GENMASK(11, 8)


@@ -260,7 +260,6 @@ struct tegra_pcie_dw {
u32 msi_ctrl_int;
u32 num_lanes;
u32 cid;
u32 cfg_link_cap_l1sub;
u32 ras_des_cap;
u32 pcie_cap_base;
u32 aspm_cmrt;
@@ -475,8 +474,7 @@ static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
return IRQ_HANDLED;
/* If EP doesn't advertise L1SS, just return */
val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
if (!(val & (PCI_L1SS_CAP_ASPM_L1_1 | PCI_L1SS_CAP_ASPM_L1_2)))
if (!pci->l1ss_support)
return IRQ_HANDLED;
/* Check if BME is set to '1' */
@@ -608,24 +606,6 @@ static struct pci_ops tegra_pci_ops = {
};
#if defined(CONFIG_PCIEASPM)
static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
{
u32 val;
val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
val &= ~PCI_L1SS_CAP_ASPM_L1_1;
dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
}
static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
{
u32 val;
val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
val &= ~PCI_L1SS_CAP_ASPM_L1_2;
dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
}
static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
{
u32 val;
@@ -682,10 +662,9 @@ static int aspm_state_cnt(struct seq_file *s, void *data)
static void init_host_aspm(struct tegra_pcie_dw *pcie)
{
struct dw_pcie *pci = &pcie->pci;
u32 val;
u32 l1ss, val;
val = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
pcie->cfg_link_cap_l1sub = val + PCI_L1SS_CAP;
l1ss = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
pcie->ras_des_cap = dw_pcie_find_ext_capability(&pcie->pci,
PCI_EXT_CAP_ID_VNDR);
@@ -697,11 +676,14 @@ static void init_host_aspm(struct tegra_pcie_dw *pcie)
PCIE_RAS_DES_EVENT_COUNTER_CONTROL, val);
/* Program T_cmrt and T_pwr_on values */
val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
val = dw_pcie_readl_dbi(pci, l1ss + PCI_L1SS_CAP);
val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
val |= (pcie->aspm_cmrt << 8);
val |= (pcie->aspm_pwr_on_t << 19);
dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
dw_pcie_writel_dbi(pci, l1ss + PCI_L1SS_CAP, val);
if (pcie->supports_clkreq)
pci->l1ss_support = true;
/* Program L0s and L1 entrance latencies */
val = dw_pcie_readl_dbi(pci, PCIE_PORT_AFR);
@@ -726,8 +708,6 @@ static void init_debugfs(struct tegra_pcie_dw *pcie)
aspm_state_cnt);
}
#else
static inline void disable_aspm_l12(struct tegra_pcie_dw *pcie) { return; }
static inline void disable_aspm_l11(struct tegra_pcie_dw *pcie) { return; }
static inline void init_host_aspm(struct tegra_pcie_dw *pcie) { return; }
static inline void init_debugfs(struct tegra_pcie_dw *pcie) { return; }
#endif
@@ -931,12 +911,6 @@ static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp)
init_host_aspm(pcie);
/* Disable ASPM-L1SS advertisement if there is no CLKREQ routing */
if (!pcie->supports_clkreq) {
disable_aspm_l11(pcie);
disable_aspm_l12(pcie);
}
if (!pcie->of_data->has_l1ss_exit_fix) {
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
@@ -1871,12 +1845,6 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
init_host_aspm(pcie);
/* Disable ASPM-L1SS advertisement if there is no CLKREQ routing */
if (!pcie->supports_clkreq) {
disable_aspm_l11(pcie);
disable_aspm_l12(pcie);
}
if (!pcie->of_data->has_l1ss_exit_fix) {
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;


@@ -53,16 +53,12 @@ struct pci_config_window *pci_host_common_ecam_create(struct device *dev,
EXPORT_SYMBOL_GPL(pci_host_common_ecam_create);
int pci_host_common_init(struct platform_device *pdev,
struct pci_host_bridge *bridge,
const struct pci_ecam_ops *ops)
{
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct pci_config_window *cfg;
bridge = devm_pci_alloc_host_bridge(dev, 0);
if (!bridge)
return -ENOMEM;
of_pci_check_probe_only();
platform_set_drvdata(pdev, bridge);
@@ -85,12 +81,17 @@ EXPORT_SYMBOL_GPL(pci_host_common_init);
int pci_host_common_probe(struct platform_device *pdev)
{
const struct pci_ecam_ops *ops;
struct pci_host_bridge *bridge;
ops = of_device_get_match_data(&pdev->dev);
if (!ops)
return -ENODEV;
return pci_host_common_init(pdev, ops);
bridge = devm_pci_alloc_host_bridge(&pdev->dev, 0);
if (!bridge)
return -ENOMEM;
return pci_host_common_init(pdev, bridge, ops);
}
EXPORT_SYMBOL_GPL(pci_host_common_probe);


@@ -14,6 +14,7 @@ struct pci_ecam_ops;
int pci_host_common_probe(struct platform_device *pdev);
int pci_host_common_init(struct platform_device *pdev,
struct pci_host_bridge *bridge,
const struct pci_ecam_ops *ops);
void pci_host_common_remove(struct platform_device *pdev);


@@ -3696,48 +3696,6 @@ static int hv_send_resources_released(struct hv_device *hdev)
return 0;
}
#define HVPCI_DOM_MAP_SIZE (64 * 1024)
static DECLARE_BITMAP(hvpci_dom_map, HVPCI_DOM_MAP_SIZE);
/*
* PCI domain number 0 is used by emulated devices on Gen1 VMs, so define 0
* as invalid for passthrough PCI devices of this driver.
*/
#define HVPCI_DOM_INVALID 0
/**
* hv_get_dom_num() - Get a valid PCI domain number
 * @dom: Requested domain number
 *
 * Check if the requested PCI domain number is in use, and return another
 * number if it is.
 *
 * Return: domain number on success, HVPCI_DOM_INVALID on failure
*/
static u16 hv_get_dom_num(u16 dom)
{
unsigned int i;
if (test_and_set_bit(dom, hvpci_dom_map) == 0)
return dom;
for_each_clear_bit(i, hvpci_dom_map, HVPCI_DOM_MAP_SIZE) {
if (test_and_set_bit(i, hvpci_dom_map) == 0)
return i;
}
return HVPCI_DOM_INVALID;
}
/**
* hv_put_dom_num() - Mark the PCI domain number as free
* @dom: Domain number to be freed
*/
static void hv_put_dom_num(u16 dom)
{
clear_bit(dom, hvpci_dom_map);
}
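The removed hvpci allocator is a find-first-free bitmap with a preferred slot. A single-threaded sketch of the same policy — the kernel version uses test_and_set_bit() so concurrent probes cannot race; the plain stores here are for illustration only:

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

#define DOM_MAP_SIZE 64
#define DOM_INVALID  0   /* domain 0 reserved for emulated Gen1 devices */

static bool dom_map[DOM_MAP_SIZE] = { [DOM_INVALID] = true };

/* Try the requested domain first, then fall back to the first free
 * slot, mirroring the removed hv_get_dom_num(). */
static uint16_t get_dom_num(uint16_t dom)
{
	if (!dom_map[dom]) {
		dom_map[dom] = true;
		return dom;
	}
	for (unsigned int i = 0; i < DOM_MAP_SIZE; i++) {
		if (!dom_map[i]) {
			dom_map[i] = true;
			return (uint16_t)i;
		}
	}
	return DOM_INVALID;
}

/* Mark the domain number as free again, as hv_put_dom_num() did. */
static void put_dom_num(uint16_t dom)
{
	dom_map[dom] = false;
}
```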
/**
* hv_pci_probe() - New VMBus channel probe, for a root PCI bus
* @hdev: VMBus's tracking struct for this root PCI bus
@@ -3750,9 +3708,9 @@ static int hv_pci_probe(struct hv_device *hdev,
{
struct pci_host_bridge *bridge;
struct hv_pcibus_device *hbus;
u16 dom_req, dom;
int ret, dom;
u16 dom_req;
char *name;
int ret;
bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
if (!bridge)
@@ -3779,11 +3737,14 @@ static int hv_pci_probe(struct hv_device *hdev,
* PCI bus (which is actually emulated by the hypervisor) is domain 0.
* (2) There will be no overlap between domains (after fixing possible
* collisions) in the same VM.
*
* Because Gen1 VMs use domain 0, don't allow picking domain 0 here,
* even if bytes 4 and 5 of the instance GUID are both zero. For wider
* userspace compatibility, limit the domain ID to a 16-bit value.
*/
dom_req = hdev->dev_instance.b[5] << 8 | hdev->dev_instance.b[4];
dom = hv_get_dom_num(dom_req);
if (dom == HVPCI_DOM_INVALID) {
dom = pci_bus_find_emul_domain_nr(dom_req, 1, U16_MAX);
if (dom < 0) {
dev_err(&hdev->device,
"Unable to use dom# 0x%x or other numbers", dom_req);
ret = -EINVAL;
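dom_req packs bytes 5 and 4 of the VMBus instance GUID into a 16-bit domain, byte 5 in the high half. A sketch of just that derivation (the function name is ours):

```c
#include <stdint.h>
#include <assert.h>

/* Requested PCI domain = instance GUID bytes 5 (high) and 4 (low),
 * as computed in hv_pci_probe(). */
static uint16_t dom_req_from_guid(const uint8_t b[16])
{
	return (uint16_t)((b[5] << 8) | b[4]);
}
```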
@@ -3917,7 +3878,7 @@ close:
destroy_wq:
destroy_workqueue(hbus->wq);
free_dom:
hv_put_dom_num(hbus->bridge->domain_nr);
pci_bus_release_emul_domain_nr(hbus->bridge->domain_nr);
free_bus:
kfree(hbus);
return ret;
@@ -4042,8 +4003,6 @@ static void hv_pci_remove(struct hv_device *hdev)
irq_domain_remove(hbus->irq_domain);
irq_domain_free_fwnode(hbus->fwnode);
hv_put_dom_num(hbus->bridge->domain_nr);
kfree(hbus);
}
@@ -4217,9 +4176,6 @@ static int __init init_hv_pci_drv(void)
if (ret)
return ret;
/* Set the invalid domain number's bit, so it will not be used */
set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);
/* Initialize PCI block r/w interface */
hvpci_block_ops.read_block = hv_read_config_block;
hvpci_block_ops.write_block = hv_write_config_block;


@@ -214,6 +214,7 @@ static u32 ixp4xx_crp_byte_lane_enable_bits(u32 n, int size)
return 0xffffffff;
}
#ifdef CONFIG_ARM
static int ixp4xx_crp_read_config(struct ixp4xx_pci *p, int where, int size,
u32 *value)
{
@@ -251,6 +252,7 @@ static int ixp4xx_crp_read_config(struct ixp4xx_pci *p, int where, int size,
return PCIBIOS_SUCCESSFUL;
}
#endif
static int ixp4xx_crp_write_config(struct ixp4xx_pci *p, int where, int size,
u32 value)
@@ -470,6 +472,7 @@ static int ixp4xx_pci_parse_map_dma_ranges(struct ixp4xx_pci *p)
return 0;
}
#ifdef CONFIG_ARM
/* Only used to get context for abort handling */
static struct ixp4xx_pci *ixp4xx_pci_abort_singleton;
@@ -509,6 +512,7 @@ static int ixp4xx_pci_abort_handler(unsigned long addr, unsigned int fsr,
return 0;
}
#endif
static int __init ixp4xx_pci_probe(struct platform_device *pdev)
{
@@ -555,10 +559,12 @@ static int __init ixp4xx_pci_probe(struct platform_device *pdev)
dev_info(dev, "controller is in %s mode\n",
p->host_mode ? "host" : "option");
#ifdef CONFIG_ARM
/* Hook in our fault handler for PCI errors */
ixp4xx_pci_abort_singleton = p;
hook_fault_code(16+6, ixp4xx_pci_abort_handler, SIGBUS, 0,
"imprecise external abort");
#endif
ret = ixp4xx_pci_parse_map_ranges(p);
if (ret)


@@ -187,7 +187,6 @@ struct apple_pcie {
const struct hw_info *hw;
unsigned long *bitmap;
struct list_head ports;
struct list_head entry;
struct completion event;
struct irq_fwspec fwspec;
u32 nvecs;
@@ -206,9 +205,6 @@ struct apple_pcie_port {
int idx;
};
static LIST_HEAD(pcie_list);
static DEFINE_MUTEX(pcie_list_lock);
static void rmw_set(u32 set, void __iomem *addr)
{
writel_relaxed(readl_relaxed(addr) | set, addr);
@@ -724,32 +720,9 @@ static int apple_msi_init(struct apple_pcie *pcie)
return 0;
}
static void apple_pcie_register(struct apple_pcie *pcie)
{
guard(mutex)(&pcie_list_lock);
list_add_tail(&pcie->entry, &pcie_list);
}
static void apple_pcie_unregister(struct apple_pcie *pcie)
{
guard(mutex)(&pcie_list_lock);
list_del(&pcie->entry);
}
static struct apple_pcie *apple_pcie_lookup(struct device *dev)
{
struct apple_pcie *pcie;
guard(mutex)(&pcie_list_lock);
list_for_each_entry(pcie, &pcie_list, entry) {
if (pcie->dev == dev)
return pcie;
}
return NULL;
return pci_host_bridge_priv(dev_get_drvdata(dev));
}
static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev)
@@ -875,13 +848,15 @@ static const struct pci_ecam_ops apple_pcie_cfg_ecam_ops = {
static int apple_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct apple_pcie *pcie;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!bridge)
return -ENOMEM;
pcie = pci_host_bridge_priv(bridge);
pcie->dev = dev;
pcie->hw = of_device_get_match_data(dev);
if (!pcie->hw)
@@ -897,13 +872,7 @@ static int apple_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
apple_pcie_register(pcie);
ret = pci_host_common_init(pdev, &apple_pcie_cfg_ecam_ops);
if (ret)
apple_pcie_unregister(pcie);
return ret;
return pci_host_common_init(pdev, bridge, &apple_pcie_cfg_ecam_ops);
}
static const struct of_device_id apple_pcie_of_match[] = {


@@ -14,15 +14,18 @@
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kdebug.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/log2.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/notifier.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/panic_notifier.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/printk.h>
@@ -30,7 +33,9 @@
#include <linux/reset.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <linux/types.h>
#include "../pci.h"
@@ -48,7 +53,6 @@
#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY 0x04dc
#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_MAX_LINK_WIDTH_MASK 0x1f0
#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK 0xc00
#define PCIE_RC_CFG_PRIV1_ROOT_CAP 0x4f8
#define PCIE_RC_CFG_PRIV1_ROOT_CAP_L1SS_MODE_MASK 0xf8
@@ -155,8 +159,40 @@
#define MSI_INT_MASK_SET 0x10
#define MSI_INT_MASK_CLR 0x14
/* Error report registers */
#define PCIE_OUTB_ERR_TREAT 0x6000
#define PCIE_OUTB_ERR_TREAT_CONFIG 0x1
#define PCIE_OUTB_ERR_TREAT_MEM 0x2
#define PCIE_OUTB_ERR_VALID 0x6004
#define PCIE_OUTB_ERR_CLEAR 0x6008
#define PCIE_OUTB_ERR_ACC_INFO 0x600c
#define PCIE_OUTB_ERR_ACC_INFO_CFG_ERR BIT(0)
#define PCIE_OUTB_ERR_ACC_INFO_MEM_ERR BIT(1)
#define PCIE_OUTB_ERR_ACC_INFO_TYPE_64 BIT(2)
#define PCIE_OUTB_ERR_ACC_INFO_DIR_WRITE BIT(4)
#define PCIE_OUTB_ERR_ACC_INFO_BYTE_LANES 0xff00
#define PCIE_OUTB_ERR_ACC_ADDR 0x6010
#define PCIE_OUTB_ERR_ACC_ADDR_BUS 0xff00000
#define PCIE_OUTB_ERR_ACC_ADDR_DEV 0xf8000
#define PCIE_OUTB_ERR_ACC_ADDR_FUNC 0x7000
#define PCIE_OUTB_ERR_ACC_ADDR_REG 0xfff
#define PCIE_OUTB_ERR_CFG_CAUSE 0x6014
#define PCIE_OUTB_ERR_CFG_CAUSE_TIMEOUT BIT(6)
#define PCIE_OUTB_ERR_CFG_CAUSE_ABORT BIT(5)
#define PCIE_OUTB_ERR_CFG_CAUSE_UNSUPP_REQ BIT(4)
#define PCIE_OUTB_ERR_CFG_CAUSE_ACC_TIMEOUT BIT(2)
#define PCIE_OUTB_ERR_CFG_CAUSE_ACC_DISABLED BIT(1)
#define PCIE_OUTB_ERR_CFG_CAUSE_ACC_64BIT BIT(0)
#define PCIE_OUTB_ERR_MEM_ADDR_LO 0x6018
#define PCIE_OUTB_ERR_MEM_ADDR_HI 0x601c
#define PCIE_OUTB_ERR_MEM_CAUSE 0x6020
#define PCIE_OUTB_ERR_MEM_CAUSE_TIMEOUT BIT(6)
#define PCIE_OUTB_ERR_MEM_CAUSE_ABORT BIT(5)
#define PCIE_OUTB_ERR_MEM_CAUSE_UNSUPP_REQ BIT(4)
#define PCIE_OUTB_ERR_MEM_CAUSE_ACC_DISABLED BIT(1)
#define PCIE_OUTB_ERR_MEM_CAUSE_BAD_ADDR BIT(0)
#define PCIE_RGR1_SW_INIT_1_PERST_MASK 0x1
#define PCIE_RGR1_SW_INIT_1_PERST_SHIFT 0x0
#define RGR1_SW_INIT_1_INIT_GENERIC_MASK 0x2
#define RGR1_SW_INIT_1_INIT_GENERIC_SHIFT 0x1
@@ -259,6 +295,7 @@ struct pcie_cfg_data {
int (*perst_set)(struct brcm_pcie *pcie, u32 val);
int (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val);
int (*post_setup)(struct brcm_pcie *pcie);
bool has_err_report;
};
struct subdev_regulators {
@@ -303,6 +340,10 @@ struct brcm_pcie {
struct subdev_regulators *sr;
bool ep_wakeup_capable;
const struct pcie_cfg_data *cfg;
bool bridge_in_reset;
struct notifier_block die_notifier;
struct notifier_block panic_notifier;
spinlock_t bridge_lock;
};
static inline bool is_bmips(const struct brcm_pcie *pcie)
@@ -310,6 +351,24 @@ static inline bool is_bmips(const struct brcm_pcie *pcie)
return pcie->cfg->soc_base == BCM7435 || pcie->cfg->soc_base == BCM7425;
}
static int brcm_pcie_bridge_sw_init_set(struct brcm_pcie *pcie, u32 val)
{
unsigned long flags;
int ret;
if (pcie->cfg->has_err_report)
spin_lock_irqsave(&pcie->bridge_lock, flags);
ret = pcie->cfg->bridge_sw_init_set(pcie, val);
/* If we fail, assume the bridge is in reset (off) */
pcie->bridge_in_reset = ret ? true : val;
if (pcie->cfg->has_err_report)
spin_unlock_irqrestore(&pcie->bridge_lock, flags);
return ret;
}
/*
* This is to convert the size of the inbound "BAR" region to the
* non-linear values of PCIE_X_MISC_RC_BAR[123]_CONFIG_LO.SIZE
@@ -1075,13 +1134,13 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
void __iomem *base = pcie->base;
struct pci_host_bridge *bridge;
struct resource_entry *entry;
u32 tmp, burst, aspm_support, num_lanes, num_lanes_cap;
u32 tmp, burst, num_lanes, num_lanes_cap;
u8 num_out_wins = 0;
int num_inbound_wins = 0;
int memc, ret;
/* Reset the bridge */
ret = pcie->cfg->bridge_sw_init_set(pcie, 1);
ret = brcm_pcie_bridge_sw_init_set(pcie, 1);
if (ret)
return ret;
@@ -1097,7 +1156,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
usleep_range(100, 200);
/* Take the bridge out of reset */
ret = pcie->cfg->bridge_sw_init_set(pcie, 0);
ret = brcm_pcie_bridge_sw_init_set(pcie, 0);
if (ret)
return ret;
@@ -1175,12 +1234,9 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
/* Don't advertise L0s capability if 'aspm-no-l0s' */
aspm_support = PCIE_LINK_STATE_L1;
if (!of_property_read_bool(pcie->np, "aspm-no-l0s"))
aspm_support |= PCIE_LINK_STATE_L0S;
tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
u32p_replace_bits(&tmp, aspm_support,
PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
if (of_property_read_bool(pcie->np, "aspm-no-l0s"))
tmp &= ~PCI_EXP_LNKCAP_ASPM_L0S;
writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
/* 'tmp' still holds the contents of PRIV1_LINK_CAPABILITY */
@@ -1565,7 +1621,7 @@ static int brcm_pcie_turn_off(struct brcm_pcie *pcie)
if (!(pcie->cfg->quirks & CFG_QUIRK_AVOID_BRIDGE_SHUTDOWN))
/* Shutdown PCIe bridge */
ret = pcie->cfg->bridge_sw_init_set(pcie, 1);
ret = brcm_pcie_bridge_sw_init_set(pcie, 1);
return ret;
}
@@ -1653,7 +1709,9 @@ static int brcm_pcie_resume_noirq(struct device *dev)
goto err_reset;
/* Take bridge out of reset so we can access the SERDES reg */
pcie->cfg->bridge_sw_init_set(pcie, 0);
ret = brcm_pcie_bridge_sw_init_set(pcie, 0);
if (ret)
goto err_reset;
/* SERDES_IDDQ = 0 */
tmp = readl(base + HARD_DEBUG(pcie));
@@ -1707,6 +1765,119 @@ err_disable_clk:
return ret;
}
/* Dump out PCIe errors on die or panic */
static int brcm_pcie_dump_err(struct brcm_pcie *pcie,
const char *type)
{
void __iomem *base = pcie->base;
int i, is_cfg_err, is_mem_err, lanes;
const char *width_str, *direction_str;
u32 info, cfg_addr, cfg_cause, mem_cause, lo, hi;
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
unsigned long flags;
char lanes_str[9];
spin_lock_irqsave(&pcie->bridge_lock, flags);
/* Don't access registers when the bridge is off */
if (pcie->bridge_in_reset || readl(base + PCIE_OUTB_ERR_VALID) == 0) {
spin_unlock_irqrestore(&pcie->bridge_lock, flags);
return NOTIFY_DONE;
}
/* Read all necessary registers so we can release the spinlock ASAP */
info = readl(base + PCIE_OUTB_ERR_ACC_INFO);
is_cfg_err = !!(info & PCIE_OUTB_ERR_ACC_INFO_CFG_ERR);
is_mem_err = !!(info & PCIE_OUTB_ERR_ACC_INFO_MEM_ERR);
if (is_cfg_err) {
cfg_addr = readl(base + PCIE_OUTB_ERR_ACC_ADDR);
cfg_cause = readl(base + PCIE_OUTB_ERR_CFG_CAUSE);
}
if (is_mem_err) {
mem_cause = readl(base + PCIE_OUTB_ERR_MEM_CAUSE);
lo = readl(base + PCIE_OUTB_ERR_MEM_ADDR_LO);
hi = readl(base + PCIE_OUTB_ERR_MEM_ADDR_HI);
}
/* We've got all of the info, clear the error */
writel(1, base + PCIE_OUTB_ERR_CLEAR);
spin_unlock_irqrestore(&pcie->bridge_lock, flags);
dev_err(pcie->dev, "reporting PCIe info which may be related to %s error\n",
type);
width_str = (info & PCIE_OUTB_ERR_ACC_INFO_TYPE_64) ? "64bit" : "32bit";
direction_str = str_read_write(!(info & PCIE_OUTB_ERR_ACC_INFO_DIR_WRITE));
lanes = FIELD_GET(PCIE_OUTB_ERR_ACC_INFO_BYTE_LANES, info);
for (i = 0, lanes_str[8] = 0; i < 8; i++)
lanes_str[i] = (lanes & (1 << i)) ? '1' : '0';
if (is_cfg_err) {
int bus = FIELD_GET(PCIE_OUTB_ERR_ACC_ADDR_BUS, cfg_addr);
int dev = FIELD_GET(PCIE_OUTB_ERR_ACC_ADDR_DEV, cfg_addr);
int func = FIELD_GET(PCIE_OUTB_ERR_ACC_ADDR_FUNC, cfg_addr);
int reg = FIELD_GET(PCIE_OUTB_ERR_ACC_ADDR_REG, cfg_addr);
dev_err(pcie->dev, "Error: CFG Acc, %s, %s (%04x:%02x:%02x.%d) reg=0x%x, lanes=%s\n",
width_str, direction_str, bridge->domain_nr, bus, dev,
func, reg, lanes_str);
dev_err(pcie->dev, " Type: TO=%d Abt=%d UnsupReq=%d AccTO=%d AccDsbld=%d Acc64bit=%d\n",
!!(cfg_cause & PCIE_OUTB_ERR_CFG_CAUSE_TIMEOUT),
!!(cfg_cause & PCIE_OUTB_ERR_CFG_CAUSE_ABORT),
!!(cfg_cause & PCIE_OUTB_ERR_CFG_CAUSE_UNSUPP_REQ),
!!(cfg_cause & PCIE_OUTB_ERR_CFG_CAUSE_ACC_TIMEOUT),
!!(cfg_cause & PCIE_OUTB_ERR_CFG_CAUSE_ACC_DISABLED),
!!(cfg_cause & PCIE_OUTB_ERR_CFG_CAUSE_ACC_64BIT));
}
if (is_mem_err) {
u64 addr = ((u64)hi << 32) | (u64)lo;
dev_err(pcie->dev, "Error: Mem Acc, %s, %s, @0x%llx, lanes=%s\n",
width_str, direction_str, addr, lanes_str);
dev_err(pcie->dev, " Type: TO=%d Abt=%d UnsupReq=%d AccDsble=%d BadAddr=%d\n",
!!(mem_cause & PCIE_OUTB_ERR_MEM_CAUSE_TIMEOUT),
!!(mem_cause & PCIE_OUTB_ERR_MEM_CAUSE_ABORT),
!!(mem_cause & PCIE_OUTB_ERR_MEM_CAUSE_UNSUPP_REQ),
!!(mem_cause & PCIE_OUTB_ERR_MEM_CAUSE_ACC_DISABLED),
!!(mem_cause & PCIE_OUTB_ERR_MEM_CAUSE_BAD_ADDR));
}
return NOTIFY_DONE;
}
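The byte-lane rendering above walks an 8-bit field LSB-first into a '0'/'1' string. The same loop as a standalone helper (name is ours), matching the lanes_str construction in brcm_pcie_dump_err():

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Render an 8-bit byte-lane field as eight '0'/'1' characters,
 * LSB first, NUL-terminated. */
static void lanes_to_str(uint32_t lanes, char lanes_str[9])
{
	for (int i = 0; i < 8; i++)
		lanes_str[i] = (lanes & (1u << i)) ? '1' : '0';
	lanes_str[8] = '\0';
}
```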
static int brcm_pcie_die_notify_cb(struct notifier_block *self,
unsigned long v, void *p)
{
struct brcm_pcie *pcie =
container_of(self, struct brcm_pcie, die_notifier);
return brcm_pcie_dump_err(pcie, "Die");
}
static int brcm_pcie_panic_notify_cb(struct notifier_block *self,
unsigned long v, void *p)
{
struct brcm_pcie *pcie =
container_of(self, struct brcm_pcie, panic_notifier);
return brcm_pcie_dump_err(pcie, "Panic");
}
static void brcm_register_die_notifiers(struct brcm_pcie *pcie)
{
pcie->panic_notifier.notifier_call = brcm_pcie_panic_notify_cb;
atomic_notifier_chain_register(&panic_notifier_list,
&pcie->panic_notifier);
pcie->die_notifier.notifier_call = brcm_pcie_die_notify_cb;
register_die_notifier(&pcie->die_notifier);
}
static void brcm_unregister_die_notifiers(struct brcm_pcie *pcie)
{
unregister_die_notifier(&pcie->die_notifier);
atomic_notifier_chain_unregister(&panic_notifier_list,
&pcie->panic_notifier);
}
static void __brcm_pcie_remove(struct brcm_pcie *pcie)
{
brcm_msi_remove(pcie);
@@ -1725,6 +1896,9 @@ static void brcm_pcie_remove(struct platform_device *pdev)
pci_stop_root_bus(bridge->bus);
pci_remove_root_bus(bridge->bus);
if (pcie->cfg->has_err_report)
brcm_unregister_die_notifiers(pcie);
__brcm_pcie_remove(pcie);
}
@@ -1825,6 +1999,7 @@ static const struct pcie_cfg_data bcm7216_cfg = {
.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_7278,
.has_phy = true,
.num_inbound_wins = 3,
.has_err_report = true,
};
static const struct pcie_cfg_data bcm7712_cfg = {
@@ -1921,7 +2096,10 @@ static int brcm_pcie_probe(struct platform_device *pdev)
if (ret)
return dev_err_probe(&pdev->dev, ret, "could not enable clock\n");
pcie->cfg->bridge_sw_init_set(pcie, 0);
ret = brcm_pcie_bridge_sw_init_set(pcie, 0);
if (ret)
return dev_err_probe(&pdev->dev, ret,
"could not de-assert bridge reset\n");
if (pcie->swinit_reset) {
ret = reset_control_assert(pcie->swinit_reset);
@@ -1996,6 +2174,11 @@ static int brcm_pcie_probe(struct platform_device *pdev)
return ret;
}
if (pcie->cfg->has_err_report) {
spin_lock_init(&pcie->bridge_lock);
brcm_register_die_notifiers(pcie);
}
return 0;
fail:


@@ -142,24 +142,34 @@
struct mtk_pcie_port;
/**
* enum mtk_pcie_quirks - MTK PCIe quirks
 * @MTK_PCIE_FIX_CLASS_ID: host's class ID needs to be fixed
 * @MTK_PCIE_FIX_DEVICE_ID: host's device ID needs to be fixed
 * @MTK_PCIE_NO_MSI: Bridge has no MSI support, and relies on an external block
 * @MTK_PCIE_SKIP_RSTB: Skip toggling the RSTB reset bits on PCIe probe
*/
enum mtk_pcie_quirks {
MTK_PCIE_FIX_CLASS_ID = BIT(0),
MTK_PCIE_FIX_DEVICE_ID = BIT(1),
MTK_PCIE_NO_MSI = BIT(2),
MTK_PCIE_SKIP_RSTB = BIT(3),
};
/**
* struct mtk_pcie_soc - differentiate between host generations
* @need_fix_class_id: whether this host's class ID needed to be fixed or not
* @need_fix_device_id: whether this host's device ID needed to be fixed or not
* @no_msi: Bridge has no MSI support, and relies on an external block
* @device_id: device ID which this host need to be fixed
* @ops: pointer to configuration access functions
* @startup: pointer to controller setting functions
* @setup_irq: pointer to initialize IRQ functions
* @quirks: PCIe device quirks.
*/
struct mtk_pcie_soc {
bool need_fix_class_id;
bool need_fix_device_id;
bool no_msi;
unsigned int device_id;
struct pci_ops *ops;
int (*startup)(struct mtk_pcie_port *port);
int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node);
enum mtk_pcie_quirks quirks;
};
/**
@@ -679,31 +689,28 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
regmap_update_bits(pcie->cfg, PCIE_SYS_CFG_V2, val, val);
}
/* Assert all reset signals */
writel(0, port->base + PCIE_RST_CTRL);
if (!(soc->quirks & MTK_PCIE_SKIP_RSTB)) {
/* Assert all reset signals */
writel(0, port->base + PCIE_RST_CTRL);
/*
* Enable PCIe link down reset, if link status changed from link up to
* link down, this will reset MAC control registers and configuration
* space.
*/
writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
/*
* Enable PCIe link down reset, if link status changed from
* link up to link down, this will reset MAC control registers
* and configuration space.
*/
writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
/*
* Described in PCIe CEM specification sections 2.2 (PERST# Signal) and
* 2.2.1 (Initial Power-Up (G3 to S0)). The deassertion of PERST# should
* be delayed 100ms (TPVPERL) for the power and clock to become stable.
*/
msleep(100);
msleep(PCIE_T_PVPERL_MS);
/* De-assert PHY, PE, PIPE, MAC and configuration reset */
val = readl(port->base + PCIE_RST_CTRL);
val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |
PCIE_MAC_SRSTB | PCIE_CRSTB;
writel(val, port->base + PCIE_RST_CTRL);
/* De-assert PHY, PE, PIPE, MAC and configuration reset */
val = readl(port->base + PCIE_RST_CTRL);
val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |
PCIE_MAC_SRSTB | PCIE_CRSTB;
writel(val, port->base + PCIE_RST_CTRL);
}
/* Set up vendor ID and class code */
if (soc->need_fix_class_id) {
if (soc->quirks & MTK_PCIE_FIX_CLASS_ID) {
val = PCI_VENDOR_ID_MEDIATEK;
writew(val, port->base + PCIE_CONF_VEND_ID);
@@ -711,7 +718,7 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
writew(val, port->base + PCIE_CONF_CLASS_ID);
}
if (soc->need_fix_device_id)
if (soc->quirks & MTK_PCIE_FIX_DEVICE_ID)
writew(soc->device_id, port->base + PCIE_CONF_DEVICE_ID);
/* 100ms timeout value should be enough for Gen1/2 training */
@@ -821,6 +828,41 @@ static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
return 0;
}
static int mtk_pcie_startup_port_an7583(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = port->pcie;
struct device *dev = pcie->dev;
struct pci_host_bridge *host;
struct resource_entry *entry;
struct regmap *pbus_regmap;
resource_size_t addr;
u32 args[2], size;
/*
* Configure PBus base address and base address mask to allow
 * the hw to detect if a given address is accessible on the PCIe
 * controller.
*/
pbus_regmap = syscon_regmap_lookup_by_phandle_args(dev->of_node,
"mediatek,pbus-csr",
ARRAY_SIZE(args),
args);
if (IS_ERR(pbus_regmap))
return PTR_ERR(pbus_regmap);
host = pci_host_bridge_from_priv(pcie);
entry = resource_list_first_type(&host->windows, IORESOURCE_MEM);
if (!entry)
return -ENODEV;
addr = entry->res->start - entry->offset;
regmap_write(pbus_regmap, args[0], lower_32_bits(addr));
size = lower_32_bits(resource_size(entry->res));
regmap_write(pbus_regmap, args[1], GENMASK(31, __fls(size)));
return mtk_pcie_startup_port_v2(port);
}
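The mask written to the second PBus CSR is GENMASK(31, __fls(size)): every address bit at or above the top set bit of the window size. A sketch using the GCC/Clang count-leading-zeros builtin in place of the kernel's __fls(); the helper name is ours, and size must be non-zero:

```c
#include <stdint.h>
#include <assert.h>

/* PBus address mask as computed in mtk_pcie_startup_port_an7583():
 * GENMASK(31, __fls(size)). __builtin_clz() stands in for __fls();
 * both require a non-zero argument. */
static uint32_t pbus_addr_mask(uint32_t size)
{
	unsigned int fls = 31u - (unsigned int)__builtin_clz(size);

	return ~0u << fls;   /* bits 31..fls set */
}
```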
static void mtk_pcie_enable_port(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = port->pcie;
@@ -1099,7 +1141,7 @@ static int mtk_pcie_probe(struct platform_device *pdev)
host->ops = pcie->soc->ops;
host->sysdata = pcie;
host->msi_domain = pcie->soc->no_msi;
host->msi_domain = !!(pcie->soc->quirks & MTK_PCIE_NO_MSI);
err = pci_host_probe(host);
if (err)
@@ -1187,9 +1229,9 @@ static const struct dev_pm_ops mtk_pcie_pm_ops = {
};
static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
.no_msi = true,
.ops = &mtk_pcie_ops,
.startup = mtk_pcie_startup_port,
.quirks = MTK_PCIE_NO_MSI,
};
static const struct mtk_pcie_soc mtk_pcie_soc_mt2712 = {
@@ -1199,22 +1241,29 @@ static const struct mtk_pcie_soc mtk_pcie_soc_mt2712 = {
};
static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = {
.need_fix_class_id = true,
.ops = &mtk_pcie_ops_v2,
.startup = mtk_pcie_startup_port_v2,
.setup_irq = mtk_pcie_setup_irq,
.quirks = MTK_PCIE_FIX_CLASS_ID,
};
static const struct mtk_pcie_soc mtk_pcie_soc_an7583 = {
.ops = &mtk_pcie_ops_v2,
.startup = mtk_pcie_startup_port_an7583,
.setup_irq = mtk_pcie_setup_irq,
.quirks = MTK_PCIE_FIX_CLASS_ID | MTK_PCIE_SKIP_RSTB,
};
static const struct mtk_pcie_soc mtk_pcie_soc_mt7629 = {
.need_fix_class_id = true,
.need_fix_device_id = true,
.device_id = PCI_DEVICE_ID_MEDIATEK_7629,
.ops = &mtk_pcie_ops_v2,
.startup = mtk_pcie_startup_port_v2,
.setup_irq = mtk_pcie_setup_irq,
.quirks = MTK_PCIE_FIX_CLASS_ID | MTK_PCIE_FIX_DEVICE_ID,
};
static const struct of_device_id mtk_pcie_ids[] = {
{ .compatible = "airoha,an7583-pcie", .data = &mtk_pcie_soc_an7583 },
{ .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 },
{ .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 },
{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 },

File diff suppressed because it is too large

@@ -578,22 +578,6 @@ static void vmd_detach_resources(struct vmd_dev *vmd)
vmd->dev->resource[VMD_MEMBAR2].child = NULL;
}
/*
* VMD domains start at 0x10000 to not clash with ACPI _SEG domains.
* Per ACPI r6.0, sec 6.5.6, _SEG returns an integer, of which the lower
* 16 bits are the PCI Segment Group (domain) number. Other bits are
* currently reserved.
*/
static int vmd_find_free_domain(void)
{
int domain = 0xffff;
struct pci_bus *bus = NULL;
while ((bus = pci_find_next_bus(bus)) != NULL)
domain = max_t(int, domain, pci_domain_nr(bus));
return domain + 1;
}
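For contrast with the common allocator now used, the removed scan can be modeled as max-plus-one over the domains already registered; two probes that both scan before either registers a bus compute the same number, which is the race noted in the changelog. A sketch, with an array standing in for the pci_find_next_bus() walk:

```c
#include <assert.h>

/* Model of the removed vmd_find_free_domain(): highest existing
 * domain plus one, starting above the 16-bit ACPI _SEG space. Not
 * race-free: concurrent callers can get the same answer. */
static int find_free_domain(const int *domains, int n)
{
	int domain = 0xffff;

	for (int i = 0; i < n; i++)
		if (domains[i] > domain)
			domain = domains[i];
	return domain + 1;
}
```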
static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
resource_size_t *offset1,
resource_size_t *offset2)
@@ -878,13 +862,6 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
.parent = res,
};
sd->vmd_dev = vmd->dev;
sd->domain = vmd_find_free_domain();
if (sd->domain < 0)
return sd->domain;
sd->node = pcibus_to_node(vmd->dev->bus);
/*
* Currently MSI remapping must be enabled in guest passthrough mode
* due to some missing interrupt remapping plumbing. This is probably
@@ -910,9 +887,24 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
sd->vmd_dev = vmd->dev;
/*
* Emulated domains start at 0x10000 to not clash with ACPI _SEG
* domains. Per ACPI r6.0, sec 6.5.6, _SEG returns an integer, of
* which the lower 16 bits are the PCI Segment Group (domain) number.
* Other bits are currently reserved.
*/
sd->domain = pci_bus_find_emul_domain_nr(0, 0x10000, INT_MAX);
if (sd->domain < 0)
return sd->domain;
sd->node = pcibus_to_node(vmd->dev->bus);
vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
&vmd_ops, sd, &resources);
if (!vmd->bus) {
pci_bus_release_emul_domain_nr(sd->domain);
pci_free_resource_list(&resources);
vmd_remove_irq_domain(vmd);
return -ENODEV;
@@ -1005,6 +997,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
return -ENOMEM;
vmd->dev = dev;
vmd->sysdata.domain = PCI_DOMAIN_NR_NOT_SET;
vmd->instance = ida_alloc(&vmd_instance_ida, GFP_KERNEL);
if (vmd->instance < 0)
return vmd->instance;
@@ -1070,6 +1063,7 @@ static void vmd_remove(struct pci_dev *dev)
vmd_detach_resources(vmd);
vmd_remove_irq_domain(vmd);
ida_free(&vmd_instance_ida, vmd->instance);
pci_bus_release_emul_domain_nr(vmd->sysdata.domain);
}
static void vmd_shutdown(struct pci_dev *dev)


@@ -729,8 +729,9 @@ static void pci_epf_test_enable_doorbell(struct pci_epf_test *epf_test,
if (bar < BAR_0)
goto err_doorbell_cleanup;
ret = request_irq(epf->db_msg[0].virq, pci_epf_test_doorbell_handler, 0,
"pci-ep-test-doorbell", epf_test);
ret = request_threaded_irq(epf->db_msg[0].virq, NULL,
pci_epf_test_doorbell_handler, IRQF_ONESHOT,
"pci-ep-test-doorbell", epf_test);
if (ret) {
dev_err(&epf->dev,
"Failed to request doorbell IRQ: %d\n",


@@ -36,11 +36,13 @@
* PCIe Root Port PCI EP
*/
#include <linux/atomic.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/pci-ep-msi.h>
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
#include <linux/ntb.h>
@@ -126,12 +128,13 @@ struct epf_ntb {
u32 db_count;
u32 spad_count;
u64 mws_size[MAX_MW];
- u64 db;
+ atomic64_t db;
u32 vbus_number;
u16 vntb_pid;
u16 vntb_vid;
bool linkup;
bool msi_doorbell;
u32 spad_size;
enum pci_barno epf_ntb_bar[VNTB_BAR_NUM];
@@ -258,9 +261,9 @@ static void epf_ntb_cmd_handler(struct work_struct *work)
ntb = container_of(work, struct epf_ntb, cmd_handler.work);
- for (i = 1; i < ntb->db_count; i++) {
+ for (i = 1; i < ntb->db_count && !ntb->msi_doorbell; i++) {
if (ntb->epf_db[i]) {
- ntb->db |= 1 << (i - 1);
+ atomic64_or(1 << (i - 1), &ntb->db);
ntb_db_event(&ntb->ntb, i);
ntb->epf_db[i] = 0;
}
@@ -319,7 +322,21 @@ static void epf_ntb_cmd_handler(struct work_struct *work)
reset_handler:
queue_delayed_work(kpcintb_workqueue, &ntb->cmd_handler,
- msecs_to_jiffies(5));
+ ntb->msi_doorbell ? msecs_to_jiffies(500) : msecs_to_jiffies(5));
}
static irqreturn_t epf_ntb_doorbell_handler(int irq, void *data)
{
struct epf_ntb *ntb = data;
int i;
for (i = 1; i < ntb->db_count; i++)
if (irq == ntb->epf->db_msg[i].virq) {
atomic64_or(1 << (i - 1), &ntb->db);
ntb_db_event(&ntb->ntb, i);
}
return IRQ_HANDLED;
}
/**
@@ -500,6 +517,94 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb)
return 0;
}
static int epf_ntb_db_bar_init_msi_doorbell(struct epf_ntb *ntb,
struct pci_epf_bar *db_bar,
const struct pci_epc_features *epc_features,
enum pci_barno barno)
{
struct pci_epf *epf = ntb->epf;
dma_addr_t low, high;
struct msi_msg *msg;
size_t sz;
int ret;
int i;
ret = pci_epf_alloc_doorbell(epf, ntb->db_count);
if (ret)
return ret;
for (i = 0; i < ntb->db_count; i++) {
ret = request_irq(epf->db_msg[i].virq, epf_ntb_doorbell_handler,
0, "pci_epf_vntb_db", ntb);
if (ret) {
dev_err(&epf->dev,
"Failed to request doorbell IRQ: %d\n",
epf->db_msg[i].virq);
goto err_free_irq;
}
}
msg = &epf->db_msg[0].msg;
high = 0;
low = (u64)msg->address_hi << 32 | msg->address_lo;
for (i = 0; i < ntb->db_count; i++) {
struct msi_msg *msg = &epf->db_msg[i].msg;
dma_addr_t addr = (u64)msg->address_hi << 32 | msg->address_lo;
low = min(low, addr);
high = max(high, addr);
}
sz = high - low + sizeof(u32);
ret = pci_epf_assign_bar_space(epf, sz, barno, epc_features, 0, low);
if (ret) {
dev_err(&epf->dev, "Failed to assign Doorbell BAR space\n");
goto err_free_irq;
}
ret = pci_epc_set_bar(ntb->epf->epc, ntb->epf->func_no,
ntb->epf->vfunc_no, db_bar);
if (ret) {
dev_err(&epf->dev, "Failed to set Doorbell BAR\n");
goto err_free_irq;
}
for (i = 0; i < ntb->db_count; i++) {
struct msi_msg *msg = &epf->db_msg[i].msg;
dma_addr_t addr;
size_t offset;
ret = pci_epf_align_inbound_addr(epf, db_bar->barno,
((u64)msg->address_hi << 32) | msg->address_lo,
&addr, &offset);
if (ret) {
ntb->msi_doorbell = false;
goto err_free_irq;
}
ntb->reg->db_data[i] = msg->data;
ntb->reg->db_offset[i] = offset;
}
ntb->reg->db_entry_size = 0;
ntb->msi_doorbell = true;
return 0;
err_free_irq:
for (i--; i >= 0; i--)
free_irq(epf->db_msg[i].virq, ntb);
pci_epf_free_doorbell(ntb->epf);
return ret;
}
/**
* epf_ntb_db_bar_init() - Configure Doorbell window BARs
* @ntb: NTB device that facilitates communication between HOST and VHOST
@@ -520,21 +625,25 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
ntb->epf->func_no,
ntb->epf->vfunc_no);
barno = ntb->epf_ntb_bar[BAR_DB];
- mw_addr = pci_epf_alloc_space(ntb->epf, size, barno, epc_features, 0);
- if (!mw_addr) {
- dev_err(dev, "Failed to allocate OB address\n");
- return -ENOMEM;
- }
- ntb->epf_db = mw_addr;
epf_bar = &ntb->epf->bar[barno];
- ret = pci_epc_set_bar(ntb->epf->epc, ntb->epf->func_no, ntb->epf->vfunc_no, epf_bar);
+ ret = epf_ntb_db_bar_init_msi_doorbell(ntb, epf_bar, epc_features, barno);
if (ret) {
- dev_err(dev, "Doorbell BAR set failed\n");
+ /* fall back to polling mode */
+ mw_addr = pci_epf_alloc_space(ntb->epf, size, barno, epc_features, 0);
+ if (!mw_addr) {
+ dev_err(dev, "Failed to allocate OB address\n");
+ return -ENOMEM;
+ }
+ ntb->epf_db = mw_addr;
+ ret = pci_epc_set_bar(ntb->epf->epc, ntb->epf->func_no,
+ ntb->epf->vfunc_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "Doorbell BAR set failed\n");
+ goto err_alloc_peer_mem;
+ }
}
return ret;
@@ -554,6 +663,16 @@ static void epf_ntb_db_bar_clear(struct epf_ntb *ntb)
{
enum pci_barno barno;
if (ntb->msi_doorbell) {
int i;
for (i = 0; i < ntb->db_count; i++)
free_irq(ntb->epf->db_msg[i].virq, ntb);
}
if (ntb->epf->db_msg)
pci_epf_free_doorbell(ntb->epf);
barno = ntb->epf_ntb_bar[BAR_DB];
pci_epf_free_space(ntb->epf, ntb->epf_db, barno, 0);
pci_epc_clear_bar(ntb->epf->epc,
@@ -1268,7 +1387,7 @@ static u64 vntb_epf_db_read(struct ntb_dev *ndev)
{
struct epf_ntb *ntb = ntb_ndev(ndev);
- return ntb->db;
+ return atomic64_read(&ntb->db);
}
static int vntb_epf_mw_get_align(struct ntb_dev *ndev, int pidx, int idx,
@@ -1308,7 +1427,7 @@ static int vntb_epf_db_clear(struct ntb_dev *ndev, u64 db_bits)
{
struct epf_ntb *ntb = ntb_ndev(ndev);
- ntb->db &= ~db_bits;
+ atomic64_and(~db_bits, &ntb->db);
return 0;
}


@@ -208,6 +208,48 @@ void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
}
EXPORT_SYMBOL_GPL(pci_epf_remove_vepf);
static int pci_epf_get_required_bar_size(struct pci_epf *epf, size_t *bar_size,
size_t *aligned_mem_size,
enum pci_barno bar,
const struct pci_epc_features *epc_features,
enum pci_epc_interface_type type)
{
u64 bar_fixed_size = epc_features->bar[bar].fixed_size;
size_t align = epc_features->align;
size_t size = *bar_size;
if (size < 128)
size = 128;
/* According to PCIe base spec, min size for a resizable BAR is 1 MB. */
if (epc_features->bar[bar].type == BAR_RESIZABLE && size < SZ_1M)
size = SZ_1M;
if (epc_features->bar[bar].type == BAR_FIXED && bar_fixed_size) {
if (size > bar_fixed_size) {
dev_err(&epf->dev,
"requested BAR size is larger than fixed size\n");
return -ENOMEM;
}
size = bar_fixed_size;
} else {
/* BAR size must be power of two */
size = roundup_pow_of_two(size);
}
*bar_size = size;
/*
* The EPC's BAR start address must meet alignment requirements. In most
* cases, the alignment will match the BAR size. However, differences
* can occur, for example, when the fixed BAR size (e.g., 128 bytes) is
* smaller than the required alignment (e.g., 4 KB).
*/
*aligned_mem_size = align ? ALIGN(size, align) : size;
return 0;
}
/**
* pci_epf_free_space() - free the allocated PCI EPF register space
* @epf: the EPF device from whom to free the memory
@@ -236,13 +278,13 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
}
dev = epc->dev.parent;
- dma_free_coherent(dev, epf_bar[bar].aligned_size, addr,
+ dma_free_coherent(dev, epf_bar[bar].mem_size, addr,
epf_bar[bar].phys_addr);
epf_bar[bar].phys_addr = 0;
epf_bar[bar].addr = NULL;
epf_bar[bar].size = 0;
- epf_bar[bar].aligned_size = 0;
+ epf_bar[bar].mem_size = 0;
epf_bar[bar].barno = 0;
epf_bar[bar].flags = 0;
}
@@ -264,40 +306,16 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
const struct pci_epc_features *epc_features,
enum pci_epc_interface_type type)
{
- u64 bar_fixed_size = epc_features->bar[bar].fixed_size;
- size_t aligned_size, align = epc_features->align;
struct pci_epf_bar *epf_bar;
dma_addr_t phys_addr;
struct pci_epc *epc;
struct device *dev;
+ size_t mem_size;
void *space;
- if (size < 128)
- size = 128;
- /* According to PCIe base spec, min size for a resizable BAR is 1 MB. */
- if (epc_features->bar[bar].type == BAR_RESIZABLE && size < SZ_1M)
- size = SZ_1M;
- if (epc_features->bar[bar].type == BAR_FIXED && bar_fixed_size) {
- if (size > bar_fixed_size) {
- dev_err(&epf->dev,
- "requested BAR size is larger than fixed size\n");
- return NULL;
- }
- size = bar_fixed_size;
- } else {
- /* BAR size must be power of two */
- size = roundup_pow_of_two(size);
- }
- /*
- * Allocate enough memory to accommodate the iATU alignment
- * requirement. In most cases, this will be the same as .size but
- * it might be different if, for example, the fixed size of a BAR
- * is smaller than align.
- */
- aligned_size = align ? ALIGN(size, align) : size;
+ if (pci_epf_get_required_bar_size(epf, &size, &mem_size, bar,
+ epc_features, type))
+ return NULL;
if (type == PRIMARY_INTERFACE) {
epc = epf->epc;
@@ -308,7 +326,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
}
dev = epc->dev.parent;
- space = dma_alloc_coherent(dev, aligned_size, &phys_addr, GFP_KERNEL);
+ space = dma_alloc_coherent(dev, mem_size, &phys_addr, GFP_KERNEL);
if (!space) {
dev_err(dev, "failed to allocate mem space\n");
return NULL;
@@ -317,7 +335,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
epf_bar[bar].phys_addr = phys_addr;
epf_bar[bar].addr = space;
epf_bar[bar].size = size;
- epf_bar[bar].aligned_size = aligned_size;
+ epf_bar[bar].mem_size = mem_size;
epf_bar[bar].barno = bar;
if (upper_32_bits(size) || epc_features->bar[bar].only_64bit)
epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
@@ -328,6 +346,83 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
}
EXPORT_SYMBOL_GPL(pci_epf_alloc_space);
/**
* pci_epf_assign_bar_space() - Assign PCI EPF BAR space
* @epf: EPF device to assign the BAR memory
* @size: Size of the memory that has to be assigned
* @bar: BAR number for which the memory is assigned
* @epc_features: Features provided by the EPC specific to this EPF
* @type: Identifies if the assignment is for primary EPC or secondary EPC
* @bar_addr: Address to be assigned for the @bar
*
* Invoke to assign memory for the PCI EPF BAR.
* Flag PCI_BASE_ADDRESS_MEM_TYPE_64 will automatically get set if the BAR
* can only be a 64-bit BAR, or if the requested size is larger than 2 GB.
*/
int pci_epf_assign_bar_space(struct pci_epf *epf, size_t size,
enum pci_barno bar,
const struct pci_epc_features *epc_features,
enum pci_epc_interface_type type,
dma_addr_t bar_addr)
{
size_t bar_size, aligned_mem_size;
struct pci_epf_bar *epf_bar;
dma_addr_t limit;
int pos;
if (!size)
return -EINVAL;
limit = bar_addr + size - 1;
/*
* Bits: 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
* bar_addr: U U U U U U 0 X X X X X X X X X
* limit: U U U U U U 1 X X X X X X X X X
*
* bar_addr^limit 0 0 0 0 0 0 1 X X X X X X X X X
*
* U: unchanged address bits in range [bar_addr, limit]
* X: bit 0 or 1
*
* (bar_addr^limit) & BIT_ULL(pos) will find the first set bit from MSB
* (pos). And value of (2 ^ pos) should be able to cover the BAR range.
*/
for (pos = 8 * sizeof(dma_addr_t) - 1; pos > 0; pos--)
if ((limit ^ bar_addr) & BIT_ULL(pos))
break;
if (pos == 8 * sizeof(dma_addr_t) - 1)
return -EINVAL;
bar_size = BIT_ULL(pos + 1);
if (pci_epf_get_required_bar_size(epf, &bar_size, &aligned_mem_size,
bar, epc_features, type))
return -ENOMEM;
if (type == PRIMARY_INTERFACE)
epf_bar = epf->bar;
else
epf_bar = epf->sec_epc_bar;
epf_bar[bar].phys_addr = ALIGN_DOWN(bar_addr, aligned_mem_size);
if (epf_bar[bar].phys_addr + bar_size < limit)
return -ENOMEM;
epf_bar[bar].addr = NULL;
epf_bar[bar].size = bar_size;
epf_bar[bar].mem_size = aligned_mem_size;
epf_bar[bar].barno = bar;
if (upper_32_bits(size) || epc_features->bar[bar].only_64bit)
epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
else
epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_32;
return 0;
}
EXPORT_SYMBOL_GPL(pci_epf_assign_bar_space);
static void pci_epf_remove_cfs(struct pci_epf_driver *driver)
{
struct config_group *group, *tmp;
