From a3b02a9c6591ce154cd44e2383406390a45b530c Mon Sep 17 00:00:00 2001 From: Peter Rosin Date: Sun, 14 May 2017 21:51:06 +0200 Subject: mux: minimal mux subsystem Add a new minimalistic subsystem that handles multiplexer controllers. When multiplexers are used in various places in the kernel, and the same multiplexer controller can be used for several independent things, there should be one place to implement support for said multiplexer controller. A single multiplexer controller can also be used to control several parallel multiplexers that are in turn used by different subsystems in the kernel, leading to a need to coordinate multiplexer accesses. The multiplexer subsystem handles this coordination. Thanks go out to Lars-Peter Clausen, Jonathan Cameron, Rob Herring, Wolfram Sang, Paul Gortmaker, Dan Carpenter, Colin Ian King, Greg Kroah-Hartman and last but certainly not least to Philipp Zabel for helpful comments, reviews, patches and general encouragement! Reviewed-by: Jonathan Cameron Signed-off-by: Peter Rosin Reviewed-by: Philipp Zabel Tested-by: Philipp Zabel Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-class-mux | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 Documentation/ABI/testing/sysfs-class-mux (limited to 'Documentation/ABI') diff --git a/Documentation/ABI/testing/sysfs-class-mux b/Documentation/ABI/testing/sysfs-class-mux new file mode 100644 index 000000000000..8715f9c7bd4f --- /dev/null +++ b/Documentation/ABI/testing/sysfs-class-mux @@ -0,0 +1,16 @@ +What: /sys/class/mux/ +Date: April 2017 +KernelVersion: 4.13 +Contact: Peter Rosin +Description: + The mux/ class sub-directory belongs to the Generic MUX + Framework and provides a sysfs interface for using MUX + controllers. + +What: /sys/class/mux/muxchipN/ +Date: April 2017 +KernelVersion: 4.13 +Contact: Peter Rosin +Description: + A /sys/class/mux/muxchipN directory is created for each + probed MUX chip where N is a simple enumeration. -- cgit v1.2.3 From bfe778ac49826ced3dceb6416038e1cd887ce2bd Mon Sep 17 00:00:00 2001 From: Mika Westerberg Date: Tue, 6 Jun 2017 15:25:01 +0300 Subject: thunderbolt: Convert switch to a device A Thunderbolt domain consists of switches that are connected to each other, forming a bus. This converts each switch into a real Linux device structure and adds it to the domain. The advantage here is that we get all the goodies from the driver core, like reference counting and sysfs hierarchy, for free. Also expose device identification information to userspace via new sysfs attributes. In order to support the internal connection manager (ICM), we separate switch configuration into its own function (tb_switch_configure()), which is only called by the existing native connection manager implementation used on Macs.
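To make the split concrete, here is a minimal sketch of how a connection manager is expected to bring up its root switch with the new interface (it mirrors the tb_start() change later in this patch; the function name example_cm_start is purely illustrative):

static int example_cm_start(struct tb *tb)
{
	int ret;

	/* Allocate the root switch; nothing is uploaded to hardware yet. */
	tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
	if (!tb->root_switch)
		return -ENOMEM;

	/* Upload the configuration (native connection manager only). */
	ret = tb_switch_configure(tb->root_switch);
	if (ret)
		goto err_put;

	/* Read the DROM, initialize ports and expose the device to userspace. */
	ret = tb_switch_add(tb->root_switch);
	if (ret)
		goto err_put;

	return 0;

err_put:
	tb_switch_put(tb->root_switch);
	return ret;
}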
Signed-off-by: Mika Westerberg Reviewed-by: Yehezkel Bernat Reviewed-by: Michael Jamet Reviewed-by: Andy Shevchenko Signed-off-by: Andreas Noever Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-bus-thunderbolt | 22 ++ drivers/thunderbolt/eeprom.c | 2 + drivers/thunderbolt/switch.c | 261 +++++++++++++++++++----- drivers/thunderbolt/tb.c | 40 +++- drivers/thunderbolt/tb.h | 45 +++- 5 files changed, 303 insertions(+), 67 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-bus-thunderbolt (limited to 'Documentation/ABI') diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt new file mode 100644 index 000000000000..9f1bd0086938 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt @@ -0,0 +1,22 @@ +What: /sys/bus/thunderbolt/devices/.../device +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute contains id of this device extracted from + the device DROM. + +What: /sys/bus/thunderbolt/devices/.../vendor +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute contains vendor id of this device extracted + from the device DROM. + +What: /sys/bus/thunderbolt/devices/.../unique_id +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute contains unique_id string of this device. + This is either read from hardware registers (UUID on + newer hardware) or based on UID from the device DROM. + Can be used to uniquely identify particular device. diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c index eb2179c98b09..7e485e3ef27e 100644 --- a/drivers/thunderbolt/eeprom.c +++ b/drivers/thunderbolt/eeprom.c @@ -479,6 +479,8 @@ parse: goto err; } sw->uid = header->uid; + sw->vendor = header->vendor_id; + sw->device = header->model_id; crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len); if (crc != header->data_crc32) { diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index b379b4183bac..0475bfc6c208 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -281,6 +281,9 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active) u32 data; int res; + if (!sw->config.enabled) + return 0; + sw->config.plug_events_delay = 0xff; res = tb_sw_write(sw, ((u32 *) &sw->config) + 4, TB_CFG_SWITCH, 4, 1); if (res) @@ -307,36 +310,79 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active) sw->cap_plug_events + 1, 1); } +static ssize_t device_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); -/** - * tb_switch_free() - free a tb_switch and all downstream switches - */ -void tb_switch_free(struct tb_switch *sw) + return sprintf(buf, "%#x\n", sw->device); +} +static DEVICE_ATTR_RO(device); + +static ssize_t vendor_show(struct device *dev, struct device_attribute *attr, + char *buf) { - int i; - /* port 0 is the switch itself and never has a remote */ - for (i = 1; i <= sw->config.max_port_number; i++) { - if (tb_is_upstream_port(&sw->ports[i])) - continue; - if (sw->ports[i].remote) - tb_switch_free(sw->ports[i].remote->sw); - sw->ports[i].remote = NULL; - } + struct tb_switch *sw = tb_to_switch(dev); - if (!sw->is_unplugged) - tb_plug_events_active(sw, false); + return sprintf(buf, "%#x\n", sw->vendor); +} +static DEVICE_ATTR_RO(vendor); + +static ssize_t unique_id_show(struct device *dev, 
struct device_attribute *attr, + char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); + + return sprintf(buf, "%pUb\n", sw->uuid); +} +static DEVICE_ATTR_RO(unique_id); + +static struct attribute *switch_attrs[] = { + &dev_attr_device.attr, + &dev_attr_vendor.attr, + &dev_attr_unique_id.attr, + NULL, +}; + +static struct attribute_group switch_group = { + .attrs = switch_attrs, +}; +static const struct attribute_group *switch_groups[] = { + &switch_group, + NULL, +}; + +static void tb_switch_release(struct device *dev) +{ + struct tb_switch *sw = tb_to_switch(dev); + + kfree(sw->uuid); kfree(sw->ports); kfree(sw->drom); kfree(sw); } +struct device_type tb_switch_type = { + .name = "thunderbolt_device", + .release = tb_switch_release, +}; + /** - * tb_switch_alloc() - allocate and initialize a switch + * tb_switch_alloc() - allocate a switch + * @tb: Pointer to the owning domain + * @parent: Parent device for this switch + * @route: Route string for this switch * - * Return: Returns a NULL on failure. + * Allocates and initializes a switch. Will not upload configuration to + * the switch. For that you need to call tb_switch_configure() + * separately. The returned switch should be released by calling + * tb_switch_put(). + * + * Return: Pointer to the allocated switch or %NULL in case of failure */ -struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route) +struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, + u64 route) { int i; int cap; @@ -351,11 +397,9 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route) sw->tb = tb; if (tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5)) - goto err; - tb_info(tb, - "initializing Switch at %#llx (depth: %d, up port: %d)\n", - route, tb_route_length(route), upstream_port); - tb_info(tb, "old switch config:\n"); + goto err_free_sw_ports; + + tb_info(tb, "current switch config:\n"); tb_dump_switch(tb, &sw->config); /* configure switch */ @@ -363,30 +407,13 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route) sw->config.depth = tb_route_length(route); sw->config.route_lo = route; sw->config.route_hi = route >> 32; - sw->config.enabled = 1; - /* from here on we may use the tb_sw_* functions & macros */ - - if (sw->config.vendor_id != 0x8086) - tb_sw_warn(sw, "unknown switch vendor id %#x\n", - sw->config.vendor_id); - - if (sw->config.device_id != PCI_DEVICE_ID_INTEL_LIGHT_RIDGE && - sw->config.device_id != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C && - sw->config.device_id != PCI_DEVICE_ID_INTEL_PORT_RIDGE && - sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE && - sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE) - tb_sw_warn(sw, "unsupported switch device id %#x\n", - sw->config.device_id); - - /* upload configuration */ - if (tb_sw_write(sw, 1 + (u32 *) &sw->config, TB_CFG_SWITCH, 1, 3)) - goto err; + sw->config.enabled = 0; /* initialize ports */ sw->ports = kcalloc(sw->config.max_port_number + 1, sizeof(*sw->ports), GFP_KERNEL); if (!sw->ports) - goto err; + goto err_free_sw_ports; for (i = 0; i <= sw->config.max_port_number; i++) { /* minimum setup for tb_find_cap and tb_drom_read to work */ @@ -397,35 +424,161 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route) cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS); if (cap < 0) { tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n"); - goto err; + goto err_free_sw_ports; } sw->cap_plug_events = cap; + device_initialize(&sw->dev); + sw->dev.parent = parent; + sw->dev.bus = &tb_bus_type; + 
sw->dev.type = &tb_switch_type; + sw->dev.groups = switch_groups; + dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw)); + + return sw; + +err_free_sw_ports: + kfree(sw->ports); + kfree(sw); + + return NULL; +} + +/** + * tb_switch_configure() - Uploads configuration to the switch + * @sw: Switch to configure + * + * Call this function before the switch is added to the system. It will + * upload configuration to the switch and makes it available for the + * connection manager to use. + * + * Return: %0 in case of success and negative errno in case of failure + */ +int tb_switch_configure(struct tb_switch *sw) +{ + struct tb *tb = sw->tb; + u64 route; + int ret; + + route = tb_route(sw); + tb_info(tb, + "initializing Switch at %#llx (depth: %d, up port: %d)\n", + route, tb_route_length(route), sw->config.upstream_port_number); + + if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL) + tb_sw_warn(sw, "unknown switch vendor id %#x\n", + sw->config.vendor_id); + + if (sw->config.device_id != PCI_DEVICE_ID_INTEL_LIGHT_RIDGE && + sw->config.device_id != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C && + sw->config.device_id != PCI_DEVICE_ID_INTEL_PORT_RIDGE && + sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE && + sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE) + tb_sw_warn(sw, "unsupported switch device id %#x\n", + sw->config.device_id); + + sw->config.enabled = 1; + + /* upload configuration */ + ret = tb_sw_write(sw, 1 + (u32 *)&sw->config, TB_CFG_SWITCH, 1, 3); + if (ret) + return ret; + + return tb_plug_events_active(sw, true); +} + +static void tb_switch_set_uuid(struct tb_switch *sw) +{ + u32 uuid[4]; + int cap; + + if (sw->uuid) + return; + + /* + * The newer controllers include fused UUID as part of link + * controller specific registers + */ + cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER); + if (cap > 0) { + tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4); + } else { + /* + * ICM generates UUID based on UID and fills the upper + * two words with ones. This is not strictly following + * UUID format but we want to be compatible with it so + * we do the same here. + */ + uuid[0] = sw->uid & 0xffffffff; + uuid[1] = (sw->uid >> 32) & 0xffffffff; + uuid[2] = 0xffffffff; + uuid[3] = 0xffffffff; + } + + sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL); +} + +/** + * tb_switch_add() - Add a switch to the domain + * @sw: Switch to add + * + * This is the last step in adding switch to the domain. It will read + * identification information from DROM and initializes ports so that + * they can be used to connect other switches. The switch will be + * exposed to the userspace when this function successfully returns. To + * remove and release the switch, call tb_switch_remove(). 
+ * + * Return: %0 in case of success and negative errno in case of failure + */ +int tb_switch_add(struct tb_switch *sw) +{ + int i, ret; + /* read drom */ if (tb_drom_read(sw)) tb_sw_warn(sw, "tb_eeprom_read_rom failed, continuing\n"); tb_sw_info(sw, "uid: %#llx\n", sw->uid); + tb_switch_set_uuid(sw); + for (i = 0; i <= sw->config.max_port_number; i++) { if (sw->ports[i].disabled) { tb_port_info(&sw->ports[i], "disabled by eeprom\n"); continue; } - if (tb_init_port(&sw->ports[i])) - goto err; + ret = tb_init_port(&sw->ports[i]); + if (ret) + return ret; } - /* TODO: I2C, IECS, link controller */ + return device_add(&sw->dev); +} - if (tb_plug_events_active(sw, true)) - goto err; +/** + * tb_switch_remove() - Remove and release a switch + * @sw: Switch to remove + * + * This will remove the switch from the domain and release it after last + * reference count drops to zero. If there are switches connected below + * this switch, they will be removed as well. + */ +void tb_switch_remove(struct tb_switch *sw) +{ + int i; - return sw; -err: - kfree(sw->ports); - kfree(sw->drom); - kfree(sw); - return NULL; + /* port 0 is the switch itself and never has a remote */ + for (i = 1; i <= sw->config.max_port_number; i++) { + if (tb_is_upstream_port(&sw->ports[i])) + continue; + if (sw->ports[i].remote) + tb_switch_remove(sw->ports[i].remote->sw); + sw->ports[i].remote = NULL; + } + + if (!sw->is_unplugged) + tb_plug_events_active(sw, false); + + device_unregister(&sw->dev); } /** diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 9f00a0f28d53..94ecac012428 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -61,9 +61,21 @@ static void tb_scan_port(struct tb_port *port) tb_port_WARN(port, "port already has a remote!\n"); return; } - sw = tb_switch_alloc(port->sw->tb, tb_downstream_route(port)); + sw = tb_switch_alloc(port->sw->tb, &port->sw->dev, + tb_downstream_route(port)); if (!sw) return; + + if (tb_switch_configure(sw)) { + tb_switch_put(sw); + return; + } + + if (tb_switch_add(sw)) { + tb_switch_put(sw); + return; + } + port->remote = tb_upstream_port(sw); tb_upstream_port(sw)->remote = port; tb_scan_switch(sw); @@ -100,7 +112,7 @@ static void tb_free_unplugged_children(struct tb_switch *sw) if (!port->remote) continue; if (port->remote->sw->is_unplugged) { - tb_switch_free(port->remote->sw); + tb_switch_remove(port->remote->sw); port->remote = NULL; } else { tb_free_unplugged_children(port->remote->sw); @@ -266,7 +278,7 @@ static void tb_handle_hotplug(struct work_struct *work) tb_port_info(port, "unplugged\n"); tb_sw_set_unplugged(port->remote->sw); tb_free_invalid_tunnels(tb); - tb_switch_free(port->remote->sw); + tb_switch_remove(port->remote->sw); port->remote = NULL; } else { tb_port_info(port, @@ -325,22 +337,32 @@ static void tb_stop(struct tb *tb) tb_pci_deactivate(tunnel); tb_pci_free(tunnel); } - - if (tb->root_switch) - tb_switch_free(tb->root_switch); - tb->root_switch = NULL; - + tb_switch_remove(tb->root_switch); tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */ } static int tb_start(struct tb *tb) { struct tb_cm *tcm = tb_priv(tb); + int ret; - tb->root_switch = tb_switch_alloc(tb, 0); + tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0); if (!tb->root_switch) return -ENOMEM; + ret = tb_switch_configure(tb->root_switch); + if (ret) { + tb_switch_put(tb->root_switch); + return ret; + } + + /* Announce the switch to the world */ + ret = tb_switch_add(tb->root_switch); + if (ret) { + tb_switch_put(tb->root_switch); + return 
ret; + } + /* Full scan to discover devices added before the driver was loaded. */ tb_scan_switch(tb->root_switch); tb_activate_pcie_devices(tb); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 5fab4c44f124..f7dfe733d71a 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -8,20 +8,36 @@ #define TB_H_ #include +#include #include "tb_regs.h" #include "ctl.h" /** * struct tb_switch - a thunderbolt switch + * @dev: Device for the switch + * @config: Switch configuration + * @ports: Ports in this switch + * @tb: Pointer to the domain the switch belongs to + * @uid: Unique ID of the switch + * @uuid: UUID of the switch (or %NULL if not supported) + * @vendor: Vendor ID of the switch + * @device: Device ID of the switch + * @cap_plug_events: Offset to the plug events capability (%0 if not found) + * @is_unplugged: The switch is going away + * @drom: DROM of the switch (%NULL if not found) */ struct tb_switch { + struct device dev; struct tb_regs_switch_header config; struct tb_port *ports; struct tb *tb; u64 uid; - int cap_plug_events; /* offset, zero if not found */ - bool is_unplugged; /* unplugged, will go away */ + uuid_be *uuid; + u16 vendor; + u16 device; + int cap_plug_events; + bool is_unplugged; u8 *drom; }; @@ -242,6 +258,7 @@ struct tb *tb_probe(struct tb_nhi *nhi); extern struct bus_type tb_bus_type; extern struct device_type tb_domain_type; +extern struct device_type tb_switch_type; int tb_domain_init(void); void tb_domain_exit(void); @@ -257,14 +274,34 @@ static inline void tb_domain_put(struct tb *tb) put_device(&tb->dev); } -struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route); -void tb_switch_free(struct tb_switch *sw); +struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, + u64 route); +int tb_switch_configure(struct tb_switch *sw); +int tb_switch_add(struct tb_switch *sw); +void tb_switch_remove(struct tb_switch *sw); void tb_switch_suspend(struct tb_switch *sw); int tb_switch_resume(struct tb_switch *sw); int tb_switch_reset(struct tb *tb, u64 route); void tb_sw_set_unplugged(struct tb_switch *sw); struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route); +static inline void tb_switch_put(struct tb_switch *sw) +{ + put_device(&sw->dev); +} + +static inline bool tb_is_switch(const struct device *dev) +{ + return dev->type == &tb_switch_type; +} + +static inline struct tb_switch *tb_to_switch(struct device *dev) +{ + if (tb_is_switch(dev)) + return container_of(dev, struct tb_switch, dev); + return NULL; +} + int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); int tb_port_add_nfc_credits(struct tb_port *port, int credits); int tb_port_clear_counter(struct tb_port *port, int counter); -- cgit v1.2.3 From 72ee33907b629355d8fd1980140a467041a9f519 Mon Sep 17 00:00:00 2001 From: Mika Westerberg Date: Tue, 6 Jun 2017 15:25:05 +0300 Subject: thunderbolt: Read vendor and device name from DROM The device DROM contains name of the vendor and device among other things. Extract this information and expose it to the userspace via two new attributes. 
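As a rough illustration of how userspace might consume the two new attributes (the device path 0-1 below is hypothetical; real entries are enumerated under /sys/bus/thunderbolt/devices):

#include <stdio.h>

/* Hypothetical sysfs path of a connected Thunderbolt device. */
#define TB_DEV "/sys/bus/thunderbolt/devices/0-1"

static void print_attr(const char *name)
{
	char path[256], buf[256];
	FILE *f;

	snprintf(path, sizeof(path), TB_DEV "/%s", name);
	f = fopen(path, "r");
	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", name, buf);
	fclose(f);
}

int main(void)
{
	print_attr("vendor_name");
	print_attr("device_name");
	return 0;
}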
Signed-off-by: Mika Westerberg Reviewed-by: Yehezkel Bernat Reviewed-by: Michael Jamet Signed-off-by: Andreas Noever Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-bus-thunderbolt | 14 +++++++++++ drivers/thunderbolt/eeprom.c | 32 +++++++++++++++++++++++++ drivers/thunderbolt/switch.c | 22 +++++++++++++++++ drivers/thunderbolt/tb.h | 4 ++++ 4 files changed, 72 insertions(+) (limited to 'Documentation/ABI') diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt index 9f1bd0086938..29a516f53d2c 100644 --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt @@ -5,6 +5,13 @@ Contact: thunderbolt-software@lists.01.org Description: This attribute contains id of this device extracted from the device DROM. +What: /sys/bus/thunderbolt/devices/.../device_name +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute contains name of this device extracted from + the device DROM. + What: /sys/bus/thunderbolt/devices/.../vendor Date: Sep 2017 KernelVersion: 4.13 @@ -12,6 +19,13 @@ Contact: thunderbolt-software@lists.01.org Description: This attribute contains vendor id of this device extracted from the device DROM. +What: /sys/bus/thunderbolt/devices/.../vendor_name +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute contains vendor name of this device extracted + from the device DROM. + What: /sys/bus/thunderbolt/devices/.../unique_id Date: Sep 2017 KernelVersion: 4.13 diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c index 5c7d80a109b1..d40a5f07fc4c 100644 --- a/drivers/thunderbolt/eeprom.c +++ b/drivers/thunderbolt/eeprom.c @@ -204,6 +204,11 @@ struct tb_drom_entry_header { enum tb_drom_entry_type type:1; } __packed; +struct tb_drom_entry_generic { + struct tb_drom_entry_header header; + u8 data[0]; +} __packed; + struct tb_drom_entry_port { /* BYTES 0-1 */ struct tb_drom_entry_header header; @@ -295,6 +300,32 @@ int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid) return 0; } +static int tb_drom_parse_entry_generic(struct tb_switch *sw, + struct tb_drom_entry_header *header) +{ + const struct tb_drom_entry_generic *entry = + (const struct tb_drom_entry_generic *)header; + + switch (header->index) { + case 1: + /* Length includes 2 bytes header so remove it before copy */ + sw->vendor_name = kstrndup(entry->data, + header->len - sizeof(*header), GFP_KERNEL); + if (!sw->vendor_name) + return -ENOMEM; + break; + + case 2: + sw->device_name = kstrndup(entry->data, + header->len - sizeof(*header), GFP_KERNEL); + if (!sw->device_name) + return -ENOMEM; + break; + } + + return 0; +} + static int tb_drom_parse_entry_port(struct tb_switch *sw, struct tb_drom_entry_header *header) { @@ -350,6 +381,7 @@ static int tb_drom_parse_entries(struct tb_switch *sw) switch (entry->type) { case TB_DROM_ENTRY_GENERIC: + res = tb_drom_parse_entry_generic(sw, entry); break; case TB_DROM_ENTRY_PORT: res = tb_drom_parse_entry_port(sw, entry); diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 2390f08b94da..11f16a141686 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -319,6 +319,15 @@ static ssize_t device_show(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RO(device); +static ssize_t +device_name_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct 
tb_switch *sw = tb_to_switch(dev); + + return sprintf(buf, "%s\n", sw->device_name ? sw->device_name : ""); +} +static DEVICE_ATTR_RO(device_name); + static ssize_t vendor_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -328,6 +337,15 @@ static ssize_t vendor_show(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RO(vendor); +static ssize_t +vendor_name_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); + + return sprintf(buf, "%s\n", sw->vendor_name ? sw->vendor_name : ""); +} +static DEVICE_ATTR_RO(vendor_name); + static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -339,7 +357,9 @@ static DEVICE_ATTR_RO(unique_id); static struct attribute *switch_attrs[] = { &dev_attr_device.attr, + &dev_attr_device_name.attr, &dev_attr_vendor.attr, + &dev_attr_vendor_name.attr, &dev_attr_unique_id.attr, NULL, }; @@ -358,6 +378,8 @@ static void tb_switch_release(struct device *dev) struct tb_switch *sw = tb_to_switch(dev); kfree(sw->uuid); + kfree(sw->device_name); + kfree(sw->vendor_name); kfree(sw->ports); kfree(sw->drom); kfree(sw); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index f7dfe733d71a..6d4910ef2eb9 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -23,6 +23,8 @@ * @uuid: UUID of the switch (or %NULL if not supported) * @vendor: Vendor ID of the switch * @device: Device ID of the switch + * @vendor_name: Name of the vendor (or %NULL if not known) + * @device_name: Name of the device (or %NULL if not known) * @cap_plug_events: Offset to the plug events capability (%0 if not found) * @is_unplugged: The switch is going away * @drom: DROM of the switch (%NULL if not found) @@ -36,6 +38,8 @@ struct tb_switch { uuid_be *uuid; u16 vendor; u16 device; + const char *vendor_name; + const char *device_name; int cap_plug_events; bool is_unplugged; u8 *drom; -- cgit v1.2.3 From f67cf491175a315ca86c9b349708bfed7b1f40c1 Mon Sep 17 00:00:00 2001 From: Mika Westerberg Date: Tue, 6 Jun 2017 15:25:16 +0300 Subject: thunderbolt: Add support for Internal Connection Manager (ICM) Starting from Intel Falcon Ridge, the internal connection manager running on the Thunderbolt host controller has supported 4 security levels. One reason for this is to prevent DMA attacks and only allow connecting devices the user trusts. The internal connection manager (ICM) is the preferred way of connecting Thunderbolt devices over the software-only implementation typically used on Macs. The driver communicates with ICM using special Thunderbolt ring 0 (control channel) messages. In order to handle these messages, we add support for the ICM messages to the control channel. The security levels are as follows: none - No security, all tunnels are created automatically user - User needs to approve the device before tunnels are created secure - User needs to approve the device before tunnels are created. The device is sent a challenge on future connects to be able to verify it is actually the approved device. dponly - Only Display Port and USB tunnels can be created and those are created automatically. The security level is typically configurable from the system BIOS and is set to "user" by default on many systems. In this patch each Thunderbolt device will have either one or two new sysfs attributes: authorized and key. The latter appears for devices that support secure connect.
In order to identify the device, the user can read identification information, including UUID and name of the device, from sysfs and based on that make a decision to authorize the device. The device is authorized by simply writing 1 to the "authorized" sysfs attribute. This follows the USB bus device authorization mechanism. The secure connect requires an additional challenge step (writing 2 to the "authorized" attribute) in future connects when the key has already been stored to the NVM of the device. Non-ICM systems (before Alpine Ridge) continue to use the existing functionality and the security level is set to none. For systems with Alpine Ridge, even on Apple hardware, we will use ICM. This code is based on the work done by Amir Levy and Michael Jamet. Signed-off-by: Michael Jamet Signed-off-by: Mika Westerberg Reviewed-by: Yehezkel Bernat Reviewed-by: Andy Shevchenko Signed-off-by: Andreas Noever Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-bus-thunderbolt | 48 + drivers/thunderbolt/Kconfig | 12 +- drivers/thunderbolt/Makefile | 2 +- drivers/thunderbolt/ctl.c | 2 + drivers/thunderbolt/domain.c | 195 +++++ drivers/thunderbolt/icm.c | 1058 +++++++++++++++++++++++ drivers/thunderbolt/nhi.c | 33 +- drivers/thunderbolt/nhi_regs.h | 7 + drivers/thunderbolt/switch.c | 222 +++++ drivers/thunderbolt/tb.c | 7 + drivers/thunderbolt/tb.h | 79 ++ drivers/thunderbolt/tb_msgs.h | 152 ++++ 12 files changed, 1805 insertions(+), 12 deletions(-) create mode 100644 drivers/thunderbolt/icm.c (limited to 'Documentation/ABI') diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt index 29a516f53d2c..05b7f9a6431f 100644 --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt @@ -1,3 +1,51 @@ +What: /sys/bus/thunderbolt/devices/.../domainX/security +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute holds current Thunderbolt security level + set by the system BIOS. Possible values are: + + none: All devices are automatically authorized + user: Devices are only authorized based on writing + appropriate value to the authorized attribute + secure: Require devices that support secure connect at + minimum. User needs to authorize each device. + dponly: Automatically tunnel Display port (and USB). No + PCIe tunnels are created. + +What: /sys/bus/thunderbolt/devices/.../authorized +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: This attribute is used to authorize Thunderbolt devices + after they have been connected. If the device is not + authorized, no devices such as PCIe and Display port are + available to the system. + + Contents of this attribute will be 0 when the device is not + yet authorized. + + Possible values are supported: + 1: The device will be authorized and connected + + When key attribute contains 32 byte hex string the possible + values are: + 1: The 32 byte hex string is added to the device NVM and + the device is authorized. + 2: Send a challenge based on the 32 byte hex string. If the + challenge response from device is valid, the device is + authorized. In case of failure errno will be ENOKEY if + the device did not contain a key at all, and + EKEYREJECTED if the challenge response did not match.
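A hedged userspace sketch of the authorization flow documented above (the device path 0-1 is hypothetical; write "1" to simply approve the device, or "2" to challenge a key previously deposited through the key attribute described next):

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical path of a device pending authorization. */
#define AUTH_ATTR "/sys/bus/thunderbolt/devices/0-1/authorized"

/* value is "1" (approve) or "2" (challenge a stored key). */
static int tb_authorize(const char *value)
{
	int fd, ret = 0;

	fd = open(AUTH_ATTR, O_WRONLY);
	if (fd < 0)
		return -errno;

	/* On failure errno is ENOKEY or EKEYREJECTED as described above. */
	if (write(fd, value, strlen(value)) < 0)
		ret = -errno;

	close(fd);
	return ret;
}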
+ +What: /sys/bus/thunderbolt/devices/.../key +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: When a devices supports Thunderbolt secure connect it will + have this attribute. Writing 32 byte hex string changes + authorization to use the secure connection method instead. + What: /sys/bus/thunderbolt/devices/.../device Date: Sep 2017 KernelVersion: 4.13 diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig index d35db16aa43f..a9cc724985ad 100644 --- a/drivers/thunderbolt/Kconfig +++ b/drivers/thunderbolt/Kconfig @@ -1,15 +1,15 @@ menuconfig THUNDERBOLT - tristate "Thunderbolt support for Apple devices" + tristate "Thunderbolt support" depends on PCI depends on X86 || COMPILE_TEST select APPLE_PROPERTIES if EFI_STUB && X86 select CRC32 + select CRYPTO + select CRYPTO_HASH help - Cactus Ridge Thunderbolt Controller driver - This driver is required if you want to hotplug Thunderbolt devices on - Apple hardware. - - Device chaining is currently not supported. + Thunderbolt Controller driver. This driver is required if you + want to hotplug Thunderbolt devices on Apple hardware or on PCs + with Intel Falcon Ridge or newer. To compile this driver a module, choose M here. The module will be called thunderbolt. diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile index 9828e862dd35..4900febc6c8a 100644 --- a/drivers/thunderbolt/Makefile +++ b/drivers/thunderbolt/Makefile @@ -1,3 +1,3 @@ obj-${CONFIG_THUNDERBOLT} := thunderbolt.o thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o -thunderbolt-objs += domain.o dma_port.o +thunderbolt-objs += domain.o dma_port.o icm.o diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c index 27c30ff79a84..69c0232a22f8 100644 --- a/drivers/thunderbolt/ctl.c +++ b/drivers/thunderbolt/ctl.c @@ -463,6 +463,8 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame, "RX: checksum mismatch, dropping packet\n"); goto rx; } + /* Fall through */ + case TB_CFG_PKG_ICM_EVENT: tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size); goto rx; diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c index 54bc15f9bf6b..f71b63e90016 100644 --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c @@ -13,11 +13,43 @@ #include #include #include +#include +#include #include "tb.h" static DEFINE_IDA(tb_domain_ida); +static const char * const tb_security_names[] = { + [TB_SECURITY_NONE] = "none", + [TB_SECURITY_USER] = "user", + [TB_SECURITY_SECURE] = "secure", + [TB_SECURITY_DPONLY] = "dponly", +}; + +static ssize_t security_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct tb *tb = container_of(dev, struct tb, dev); + + return sprintf(buf, "%s\n", tb_security_names[tb->security_level]); +} +static DEVICE_ATTR_RO(security); + +static struct attribute *domain_attrs[] = { + &dev_attr_security.attr, + NULL, +}; + +static struct attribute_group domain_attr_group = { + .attrs = domain_attrs, +}; + +static const struct attribute_group *domain_attr_groups[] = { + &domain_attr_group, + NULL, +}; + struct bus_type tb_bus_type = { .name = "thunderbolt", }; @@ -82,6 +114,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize) tb->dev.parent = &nhi->pdev->dev; tb->dev.bus = &tb_bus_type; tb->dev.type = &tb_domain_type; + tb->dev.groups = domain_attr_groups; dev_set_name(&tb->dev, "domain%d", tb->index); device_initialize(&tb->dev); @@ -140,6 +173,12 @@ int 
tb_domain_add(struct tb *tb) */ tb_ctl_start(tb->ctl); + if (tb->cm_ops->driver_ready) { + ret = tb->cm_ops->driver_ready(tb); + if (ret) + goto err_ctl_stop; + } + ret = device_add(&tb->dev); if (ret) goto err_ctl_stop; @@ -231,6 +270,162 @@ int tb_domain_resume_noirq(struct tb *tb) return ret; } +int tb_domain_suspend(struct tb *tb) +{ + int ret; + + mutex_lock(&tb->lock); + if (tb->cm_ops->suspend) { + ret = tb->cm_ops->suspend(tb); + if (ret) { + mutex_unlock(&tb->lock); + return ret; + } + } + mutex_unlock(&tb->lock); + return 0; +} + +void tb_domain_complete(struct tb *tb) +{ + mutex_lock(&tb->lock); + if (tb->cm_ops->complete) + tb->cm_ops->complete(tb); + mutex_unlock(&tb->lock); +} + +/** + * tb_domain_approve_switch() - Approve switch + * @tb: Domain the switch belongs to + * @sw: Switch to approve + * + * This will approve switch by connection manager specific means. In + * case of success the connection manager will create tunnels for all + * supported protocols. + */ +int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw) +{ + struct tb_switch *parent_sw; + + if (!tb->cm_ops->approve_switch) + return -EPERM; + + /* The parent switch must be authorized before this one */ + parent_sw = tb_to_switch(sw->dev.parent); + if (!parent_sw || !parent_sw->authorized) + return -EINVAL; + + return tb->cm_ops->approve_switch(tb, sw); +} + +/** + * tb_domain_approve_switch_key() - Approve switch and add key + * @tb: Domain the switch belongs to + * @sw: Switch to approve + * + * For switches that support secure connect, this function first adds + * key to the switch NVM using connection manager specific means. If + * adding the key is successful, the switch is approved and connected. + * + * Return: %0 on success and negative errno in case of failure. + */ +int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw) +{ + struct tb_switch *parent_sw; + int ret; + + if (!tb->cm_ops->approve_switch || !tb->cm_ops->add_switch_key) + return -EPERM; + + /* The parent switch must be authorized before this one */ + parent_sw = tb_to_switch(sw->dev.parent); + if (!parent_sw || !parent_sw->authorized) + return -EINVAL; + + ret = tb->cm_ops->add_switch_key(tb, sw); + if (ret) + return ret; + + return tb->cm_ops->approve_switch(tb, sw); +} + +/** + * tb_domain_challenge_switch_key() - Challenge and approve switch + * @tb: Domain the switch belongs to + * @sw: Switch to approve + * + * For switches that support secure connect, this function generates + * random challenge and sends it to the switch. The switch responds to + * this and if the response matches our random challenge, the switch is + * approved and connected. + * + * Return: %0 on success and negative errno in case of failure. 
+ */ +int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw) +{ + u8 challenge[TB_SWITCH_KEY_SIZE]; + u8 response[TB_SWITCH_KEY_SIZE]; + u8 hmac[TB_SWITCH_KEY_SIZE]; + struct tb_switch *parent_sw; + struct crypto_shash *tfm; + struct shash_desc *shash; + int ret; + + if (!tb->cm_ops->approve_switch || !tb->cm_ops->challenge_switch_key) + return -EPERM; + + /* The parent switch must be authorized before this one */ + parent_sw = tb_to_switch(sw->dev.parent); + if (!parent_sw || !parent_sw->authorized) + return -EINVAL; + + get_random_bytes(challenge, sizeof(challenge)); + ret = tb->cm_ops->challenge_switch_key(tb, sw, challenge, response); + if (ret) + return ret; + + tfm = crypto_alloc_shash("hmac(sha256)", 0, 0); + if (IS_ERR(tfm)) + return PTR_ERR(tfm); + + ret = crypto_shash_setkey(tfm, sw->key, TB_SWITCH_KEY_SIZE); + if (ret) + goto err_free_tfm; + + shash = kzalloc(sizeof(*shash) + crypto_shash_descsize(tfm), + GFP_KERNEL); + if (!shash) { + ret = -ENOMEM; + goto err_free_tfm; + } + + shash->tfm = tfm; + shash->flags = CRYPTO_TFM_REQ_MAY_SLEEP; + + memset(hmac, 0, sizeof(hmac)); + ret = crypto_shash_digest(shash, challenge, sizeof(hmac), hmac); + if (ret) + goto err_free_shash; + + /* The returned HMAC must match the one we calculated */ + if (memcmp(response, hmac, sizeof(hmac))) { + ret = -EKEYREJECTED; + goto err_free_shash; + } + + crypto_free_shash(tfm); + kfree(shash); + + return tb->cm_ops->approve_switch(tb, sw); + +err_free_shash: + kfree(shash); +err_free_tfm: + crypto_free_shash(tfm); + + return ret; +} + int tb_domain_init(void) { return bus_register(&tb_bus_type); diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c new file mode 100644 index 000000000000..0ffa4ec249ac --- /dev/null +++ b/drivers/thunderbolt/icm.c @@ -0,0 +1,1058 @@ +/* + * Internal Thunderbolt Connection Manager. This is a firmware running on + * the Thunderbolt host controller performing most of the low-level + * handling. + * + * Copyright (C) 2017, Intel Corporation + * Authors: Michael Jamet + * Mika Westerberg + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "ctl.h" +#include "nhi_regs.h" +#include "tb.h" + +#define PCIE2CIO_CMD 0x30 +#define PCIE2CIO_CMD_TIMEOUT BIT(31) +#define PCIE2CIO_CMD_START BIT(30) +#define PCIE2CIO_CMD_WRITE BIT(21) +#define PCIE2CIO_CMD_CS_MASK GENMASK(20, 19) +#define PCIE2CIO_CMD_CS_SHIFT 19 +#define PCIE2CIO_CMD_PORT_MASK GENMASK(18, 13) +#define PCIE2CIO_CMD_PORT_SHIFT 13 + +#define PCIE2CIO_WRDATA 0x34 +#define PCIE2CIO_RDDATA 0x38 + +#define PHY_PORT_CS1 0x37 +#define PHY_PORT_CS1_LINK_DISABLE BIT(14) +#define PHY_PORT_CS1_LINK_STATE_MASK GENMASK(29, 26) +#define PHY_PORT_CS1_LINK_STATE_SHIFT 26 + +#define ICM_TIMEOUT 5000 /* ms */ +#define ICM_MAX_LINK 4 +#define ICM_MAX_DEPTH 6 + +/** + * struct icm - Internal connection manager private data + * @request_lock: Makes sure only one message is send to ICM at time + * @rescan_work: Work used to rescan the surviving switches after resume + * @upstream_port: Pointer to the PCIe upstream port this host + * controller is connected. 
This is only set for systems + * where ICM needs to be started manually + * @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides + * (only set when @upstream_port is not %NULL) + * @is_supported: Checks if we can support ICM on this controller + * @get_mode: Read and return the ICM firmware mode (optional) + * @get_route: Find a route string for given switch + * @device_connected: Handle device connected ICM message + * @device_disconnected: Handle device disconnected ICM message + */ +struct icm { + struct mutex request_lock; + struct delayed_work rescan_work; + struct pci_dev *upstream_port; + int vnd_cap; + bool (*is_supported)(struct tb *tb); + int (*get_mode)(struct tb *tb); + int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route); + void (*device_connected)(struct tb *tb, + const struct icm_pkg_header *hdr); + void (*device_disconnected)(struct tb *tb, + const struct icm_pkg_header *hdr); +}; + +struct icm_notification { + struct work_struct work; + struct icm_pkg_header *pkg; + struct tb *tb; +}; + +static inline struct tb *icm_to_tb(struct icm *icm) +{ + return ((void *)icm - sizeof(struct tb)); +} + +static inline u8 phy_port_from_route(u64 route, u8 depth) +{ + return tb_switch_phy_port_from_link(route >> ((depth - 1) * 8)); +} + +static inline u8 dual_link_from_link(u8 link) +{ + return link ? ((link - 1) ^ 0x01) + 1 : 0; +} + +static inline u64 get_route(u32 route_hi, u32 route_lo) +{ + return (u64)route_hi << 32 | route_lo; +} + +static inline bool is_apple(void) +{ + return dmi_match(DMI_BOARD_VENDOR, "Apple Inc."); +} + +static bool icm_match(const struct tb_cfg_request *req, + const struct ctl_pkg *pkg) +{ + const struct icm_pkg_header *res_hdr = pkg->buffer; + const struct icm_pkg_header *req_hdr = req->request; + + if (pkg->frame.eof != req->response_type) + return false; + if (res_hdr->code != req_hdr->code) + return false; + + return true; +} + +static bool icm_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg) +{ + const struct icm_pkg_header *hdr = pkg->buffer; + + if (hdr->packet_id < req->npackets) { + size_t offset = hdr->packet_id * req->response_size; + + memcpy(req->response + offset, pkg->buffer, req->response_size); + } + + return hdr->packet_id == hdr->total_packets - 1; +} + +static int icm_request(struct tb *tb, const void *request, size_t request_size, + void *response, size_t response_size, size_t npackets, + unsigned int timeout_msec) +{ + struct icm *icm = tb_priv(tb); + int retries = 3; + + do { + struct tb_cfg_request *req; + struct tb_cfg_result res; + + req = tb_cfg_request_alloc(); + if (!req) + return -ENOMEM; + + req->match = icm_match; + req->copy = icm_copy; + req->request = request; + req->request_size = request_size; + req->request_type = TB_CFG_PKG_ICM_CMD; + req->response = response; + req->npackets = npackets; + req->response_size = response_size; + req->response_type = TB_CFG_PKG_ICM_RESP; + + mutex_lock(&icm->request_lock); + res = tb_cfg_request_sync(tb->ctl, req, timeout_msec); + mutex_unlock(&icm->request_lock); + + tb_cfg_request_put(req); + + if (res.err != -ETIMEDOUT) + return res.err == 1 ? -EIO : res.err; + + usleep_range(20, 50); + } while (retries--); + + return -ETIMEDOUT; +} + +static bool icm_fr_is_supported(struct tb *tb) +{ + return !is_apple(); +} + +static inline int icm_fr_get_switch_index(u32 port) +{ + int index; + + if ((port & ICM_PORT_TYPE_MASK) != TB_TYPE_PORT) + return 0; + + index = port >> ICM_PORT_INDEX_SHIFT; + return index != 0xff ? 
index : 0; +} + +static int icm_fr_get_route(struct tb *tb, u8 link, u8 depth, u64 *route) +{ + struct icm_fr_pkg_get_topology_response *switches, *sw; + struct icm_fr_pkg_get_topology request = { + .hdr = { .code = ICM_GET_TOPOLOGY }, + }; + size_t npackets = ICM_GET_TOPOLOGY_PACKETS; + int ret, index; + u8 i; + + switches = kcalloc(npackets, sizeof(*switches), GFP_KERNEL); + if (!switches) + return -ENOMEM; + + ret = icm_request(tb, &request, sizeof(request), switches, + sizeof(*switches), npackets, ICM_TIMEOUT); + if (ret) + goto err_free; + + sw = &switches[0]; + index = icm_fr_get_switch_index(sw->ports[link]); + if (!index) { + ret = -ENODEV; + goto err_free; + } + + sw = &switches[index]; + for (i = 1; i < depth; i++) { + unsigned int j; + + if (!(sw->first_data & ICM_SWITCH_USED)) { + ret = -ENODEV; + goto err_free; + } + + for (j = 0; j < ARRAY_SIZE(sw->ports); j++) { + index = icm_fr_get_switch_index(sw->ports[j]); + if (index > sw->switch_index) { + sw = &switches[index]; + break; + } + } + } + + *route = get_route(sw->route_hi, sw->route_lo); + +err_free: + kfree(switches); + return ret; +} + +static int icm_fr_approve_switch(struct tb *tb, struct tb_switch *sw) +{ + struct icm_fr_pkg_approve_device request; + struct icm_fr_pkg_approve_device reply; + int ret; + + memset(&request, 0, sizeof(request)); + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); + request.hdr.code = ICM_APPROVE_DEVICE; + request.connection_id = sw->connection_id; + request.connection_key = sw->connection_key; + + memset(&reply, 0, sizeof(reply)); + /* Use larger timeout as establishing tunnels can take some time */ + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), + 1, 10000); + if (ret) + return ret; + + if (reply.hdr.flags & ICM_FLAGS_ERROR) { + tb_warn(tb, "PCIe tunnel creation failed\n"); + return -EIO; + } + + return 0; +} + +static int icm_fr_add_switch_key(struct tb *tb, struct tb_switch *sw) +{ + struct icm_fr_pkg_add_device_key request; + struct icm_fr_pkg_add_device_key_response reply; + int ret; + + memset(&request, 0, sizeof(request)); + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); + request.hdr.code = ICM_ADD_DEVICE_KEY; + request.connection_id = sw->connection_id; + request.connection_key = sw->connection_key; + memcpy(request.key, sw->key, TB_SWITCH_KEY_SIZE); + + memset(&reply, 0, sizeof(reply)); + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), + 1, ICM_TIMEOUT); + if (ret) + return ret; + + if (reply.hdr.flags & ICM_FLAGS_ERROR) { + tb_warn(tb, "Adding key to switch failed\n"); + return -EIO; + } + + return 0; +} + +static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw, + const u8 *challenge, u8 *response) +{ + struct icm_fr_pkg_challenge_device request; + struct icm_fr_pkg_challenge_device_response reply; + int ret; + + memset(&request, 0, sizeof(request)); + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); + request.hdr.code = ICM_CHALLENGE_DEVICE; + request.connection_id = sw->connection_id; + request.connection_key = sw->connection_key; + memcpy(request.challenge, challenge, TB_SWITCH_KEY_SIZE); + + memset(&reply, 0, sizeof(reply)); + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), + 1, ICM_TIMEOUT); + if (ret) + return ret; + + if (reply.hdr.flags & ICM_FLAGS_ERROR) + return -EKEYREJECTED; + if (reply.hdr.flags & ICM_FLAGS_NO_KEY) + return -ENOKEY; + + memcpy(response, reply.response, TB_SWITCH_KEY_SIZE); + + return 0; +} + +static void 
remove_switch(struct tb_switch *sw) +{ + struct tb_switch *parent_sw; + + parent_sw = tb_to_switch(sw->dev.parent); + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; + tb_switch_remove(sw); +} + +static void +icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) +{ + const struct icm_fr_event_device_connected *pkg = + (const struct icm_fr_event_device_connected *)hdr; + struct tb_switch *sw, *parent_sw; + struct icm *icm = tb_priv(tb); + bool authorized = false; + u8 link, depth; + u64 route; + int ret; + + link = pkg->link_info & ICM_LINK_INFO_LINK_MASK; + depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> + ICM_LINK_INFO_DEPTH_SHIFT; + authorized = pkg->link_info & ICM_LINK_INFO_APPROVED; + + ret = icm->get_route(tb, link, depth, &route); + if (ret) { + tb_err(tb, "failed to find route string for switch at %u.%u\n", + link, depth); + return; + } + + sw = tb_switch_find_by_uuid(tb, &pkg->ep_uuid); + if (sw) { + u8 phy_port, sw_phy_port; + + parent_sw = tb_to_switch(sw->dev.parent); + sw_phy_port = phy_port_from_route(tb_route(sw), sw->depth); + phy_port = phy_port_from_route(route, depth); + + /* + * On resume ICM will send us connected events for the + * devices that still are present. However, that + * information might have changed for example by the + * fact that a switch on a dual-link connection might + * have been enumerated using the other link now. Make + * sure our book keeping matches that. + */ + if (sw->depth == depth && sw_phy_port == phy_port && + !!sw->authorized == authorized) { + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; + tb_port_at(route, parent_sw)->remote = + tb_upstream_port(sw); + sw->config.route_hi = upper_32_bits(route); + sw->config.route_lo = lower_32_bits(route); + sw->connection_id = pkg->connection_id; + sw->connection_key = pkg->connection_key; + sw->link = link; + sw->depth = depth; + sw->is_unplugged = false; + tb_switch_put(sw); + return; + } + + /* + * User connected the same switch to another physical + * port or to another part of the topology. Remove the + * existing switch now before adding the new one. + */ + remove_switch(sw); + tb_switch_put(sw); + } + + /* + * If the switch was not found by UUID, look for a switch on + * same physical port (taking possible link aggregation into + * account) and depth. If we found one it is definitely a stale + * one so remove it first. 
+ */ + sw = tb_switch_find_by_link_depth(tb, link, depth); + if (!sw) { + u8 dual_link; + + dual_link = dual_link_from_link(link); + if (dual_link) + sw = tb_switch_find_by_link_depth(tb, dual_link, depth); + } + if (sw) { + remove_switch(sw); + tb_switch_put(sw); + } + + parent_sw = tb_switch_find_by_link_depth(tb, link, depth - 1); + if (!parent_sw) { + tb_err(tb, "failed to find parent switch for %u.%u\n", + link, depth); + return; + } + + sw = tb_switch_alloc(tb, &parent_sw->dev, route); + if (!sw) { + tb_switch_put(parent_sw); + return; + } + + sw->uuid = kmemdup(&pkg->ep_uuid, sizeof(pkg->ep_uuid), GFP_KERNEL); + sw->connection_id = pkg->connection_id; + sw->connection_key = pkg->connection_key; + sw->link = link; + sw->depth = depth; + sw->authorized = authorized; + sw->security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> + ICM_FLAGS_SLEVEL_SHIFT; + + /* Link the two switches now */ + tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw); + tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw); + + ret = tb_switch_add(sw); + if (ret) { + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; + tb_switch_put(sw); + } + tb_switch_put(parent_sw); +} + +static void +icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr) +{ + const struct icm_fr_event_device_disconnected *pkg = + (const struct icm_fr_event_device_disconnected *)hdr; + struct tb_switch *sw; + u8 link, depth; + + link = pkg->link_info & ICM_LINK_INFO_LINK_MASK; + depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> + ICM_LINK_INFO_DEPTH_SHIFT; + + if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) { + tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth); + return; + } + + sw = tb_switch_find_by_link_depth(tb, link, depth); + if (!sw) { + tb_warn(tb, "no switch exists at %u.%u, ignoring\n", link, + depth); + return; + } + + remove_switch(sw); + tb_switch_put(sw); +} + +static struct pci_dev *get_upstream_port(struct pci_dev *pdev) +{ + struct pci_dev *parent; + + parent = pci_upstream_bridge(pdev); + while (parent) { + if (!pci_is_pcie(parent)) + return NULL; + if (pci_pcie_type(parent) == PCI_EXP_TYPE_UPSTREAM) + break; + parent = pci_upstream_bridge(parent); + } + + if (!parent) + return NULL; + + switch (parent->device) { + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE: + return parent; + } + + return NULL; +} + +static bool icm_ar_is_supported(struct tb *tb) +{ + struct pci_dev *upstream_port; + struct icm *icm = tb_priv(tb); + + /* + * Starting from Alpine Ridge we can use ICM on Apple machines + * as well. We just need to reset and re-enable it first. + */ + if (!is_apple()) + return true; + + /* + * Find the upstream PCIe port in case we need to do reset + * through its vendor specific registers. 
+ */ + upstream_port = get_upstream_port(tb->nhi->pdev); + if (upstream_port) { + int cap; + + cap = pci_find_ext_capability(upstream_port, + PCI_EXT_CAP_ID_VNDR); + if (cap > 0) { + icm->upstream_port = upstream_port; + icm->vnd_cap = cap; + + return true; + } + } + + return false; +} + +static int icm_ar_get_mode(struct tb *tb) +{ + struct tb_nhi *nhi = tb->nhi; + int retries = 5; + u32 val; + + do { + val = ioread32(nhi->iobase + REG_FW_STS); + if (val & REG_FW_STS_NVM_AUTH_DONE) + break; + msleep(30); + } while (--retries); + + if (!retries) { + dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n"); + return -ENODEV; + } + + return nhi_mailbox_mode(nhi); +} + +static int icm_ar_get_route(struct tb *tb, u8 link, u8 depth, u64 *route) +{ + struct icm_ar_pkg_get_route_response reply; + struct icm_ar_pkg_get_route request = { + .hdr = { .code = ICM_GET_ROUTE }, + .link_info = depth << ICM_LINK_INFO_DEPTH_SHIFT | link, + }; + int ret; + + memset(&reply, 0, sizeof(reply)); + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), + 1, ICM_TIMEOUT); + if (ret) + return ret; + + if (reply.hdr.flags & ICM_FLAGS_ERROR) + return -EIO; + + *route = get_route(reply.route_hi, reply.route_lo); + return 0; +} + +static void icm_handle_notification(struct work_struct *work) +{ + struct icm_notification *n = container_of(work, typeof(*n), work); + struct tb *tb = n->tb; + struct icm *icm = tb_priv(tb); + + mutex_lock(&tb->lock); + + switch (n->pkg->code) { + case ICM_EVENT_DEVICE_CONNECTED: + icm->device_connected(tb, n->pkg); + break; + case ICM_EVENT_DEVICE_DISCONNECTED: + icm->device_disconnected(tb, n->pkg); + break; + } + + mutex_unlock(&tb->lock); + + kfree(n->pkg); + kfree(n); +} + +static void icm_handle_event(struct tb *tb, enum tb_cfg_pkg_type type, + const void *buf, size_t size) +{ + struct icm_notification *n; + + n = kmalloc(sizeof(*n), GFP_KERNEL); + if (!n) + return; + + INIT_WORK(&n->work, icm_handle_notification); + n->pkg = kmemdup(buf, size, GFP_KERNEL); + n->tb = tb; + + queue_work(tb->wq, &n->work); +} + +static int +__icm_driver_ready(struct tb *tb, enum tb_security_level *security_level) +{ + struct icm_pkg_driver_ready_response reply; + struct icm_pkg_driver_ready request = { + .hdr.code = ICM_DRIVER_READY, + }; + unsigned int retries = 10; + int ret; + + memset(&reply, 0, sizeof(reply)); + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), + 1, ICM_TIMEOUT); + if (ret) + return ret; + + if (security_level) + *security_level = reply.security_level & 0xf; + + /* + * Hold on here until the switch config space is accessible so + * that we can read root switch config successfully. 
+ */ + do { + struct tb_cfg_result res; + u32 tmp; + + res = tb_cfg_read_raw(tb->ctl, &tmp, 0, 0, TB_CFG_SWITCH, + 0, 1, 100); + if (!res.err) + return 0; + + msleep(50); + } while (--retries); + + return -ETIMEDOUT; +} + +static int pci2cio_wait_completion(struct icm *icm, unsigned long timeout_msec) +{ + unsigned long end = jiffies + msecs_to_jiffies(timeout_msec); + u32 cmd; + + do { + pci_read_config_dword(icm->upstream_port, + icm->vnd_cap + PCIE2CIO_CMD, &cmd); + if (!(cmd & PCIE2CIO_CMD_START)) { + if (cmd & PCIE2CIO_CMD_TIMEOUT) + break; + return 0; + } + + msleep(50); + } while (time_before(jiffies, end)); + + return -ETIMEDOUT; +} + +static int pcie2cio_read(struct icm *icm, enum tb_cfg_space cs, + unsigned int port, unsigned int index, u32 *data) +{ + struct pci_dev *pdev = icm->upstream_port; + int ret, vnd_cap = icm->vnd_cap; + u32 cmd; + + cmd = index; + cmd |= (port << PCIE2CIO_CMD_PORT_SHIFT) & PCIE2CIO_CMD_PORT_MASK; + cmd |= (cs << PCIE2CIO_CMD_CS_SHIFT) & PCIE2CIO_CMD_CS_MASK; + cmd |= PCIE2CIO_CMD_START; + pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_CMD, cmd); + + ret = pci2cio_wait_completion(icm, 5000); + if (ret) + return ret; + + pci_read_config_dword(pdev, vnd_cap + PCIE2CIO_RDDATA, data); + return 0; +} + +static int pcie2cio_write(struct icm *icm, enum tb_cfg_space cs, + unsigned int port, unsigned int index, u32 data) +{ + struct pci_dev *pdev = icm->upstream_port; + int vnd_cap = icm->vnd_cap; + u32 cmd; + + pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_WRDATA, data); + + cmd = index; + cmd |= (port << PCIE2CIO_CMD_PORT_SHIFT) & PCIE2CIO_CMD_PORT_MASK; + cmd |= (cs << PCIE2CIO_CMD_CS_SHIFT) & PCIE2CIO_CMD_CS_MASK; + cmd |= PCIE2CIO_CMD_WRITE | PCIE2CIO_CMD_START; + pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_CMD, cmd); + + return pci2cio_wait_completion(icm, 5000); +} + +static int icm_firmware_reset(struct tb *tb, struct tb_nhi *nhi) +{ + struct icm *icm = tb_priv(tb); + u32 val; + + /* Put ARC to wait for CIO reset event to happen */ + val = ioread32(nhi->iobase + REG_FW_STS); + val |= REG_FW_STS_CIO_RESET_REQ; + iowrite32(val, nhi->iobase + REG_FW_STS); + + /* Re-start ARC */ + val = ioread32(nhi->iobase + REG_FW_STS); + val |= REG_FW_STS_ICM_EN_INVERT; + val |= REG_FW_STS_ICM_EN_CPU; + iowrite32(val, nhi->iobase + REG_FW_STS); + + /* Trigger CIO reset now */ + return pcie2cio_write(icm, TB_CFG_SWITCH, 0, 0x50, BIT(9)); +} + +static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi) +{ + unsigned int retries = 10; + int ret; + u32 val; + + /* Check if the ICM firmware is already running */ + val = ioread32(nhi->iobase + REG_FW_STS); + if (val & REG_FW_STS_ICM_EN) + return 0; + + dev_info(&nhi->pdev->dev, "starting ICM firmware\n"); + + ret = icm_firmware_reset(tb, nhi); + if (ret) + return ret; + + /* Wait until the ICM firmware tells us it is up and running */ + do { + /* Check that the ICM firmware is running */ + val = ioread32(nhi->iobase + REG_FW_STS); + if (val & REG_FW_STS_NVM_AUTH_DONE) + return 0; + + msleep(300); + } while (--retries); + + return -ETIMEDOUT; +} + +static int icm_reset_phy_port(struct tb *tb, int phy_port) +{ + struct icm *icm = tb_priv(tb); + u32 state0, state1; + int port0, port1; + u32 val0, val1; + int ret; + + if (!icm->upstream_port) + return 0; + + if (phy_port) { + port0 = 3; + port1 = 4; + } else { + port0 = 1; + port1 = 2; + } + + /* + * Read link status of both null ports belonging to a single + * physical port. 
+ */ + ret = pcie2cio_read(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, &val0); + if (ret) + return ret; + ret = pcie2cio_read(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, &val1); + if (ret) + return ret; + + state0 = val0 & PHY_PORT_CS1_LINK_STATE_MASK; + state0 >>= PHY_PORT_CS1_LINK_STATE_SHIFT; + state1 = val1 & PHY_PORT_CS1_LINK_STATE_MASK; + state1 >>= PHY_PORT_CS1_LINK_STATE_SHIFT; + + /* If they are both up we need to reset them now */ + if (state0 != TB_PORT_UP || state1 != TB_PORT_UP) + return 0; + + val0 |= PHY_PORT_CS1_LINK_DISABLE; + ret = pcie2cio_write(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, val0); + if (ret) + return ret; + + val1 |= PHY_PORT_CS1_LINK_DISABLE; + ret = pcie2cio_write(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, val1); + if (ret) + return ret; + + /* Wait a bit and then re-enable both ports */ + usleep_range(10, 100); + + ret = pcie2cio_read(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, &val0); + if (ret) + return ret; + ret = pcie2cio_read(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, &val1); + if (ret) + return ret; + + val0 &= ~PHY_PORT_CS1_LINK_DISABLE; + ret = pcie2cio_write(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, val0); + if (ret) + return ret; + + val1 &= ~PHY_PORT_CS1_LINK_DISABLE; + return pcie2cio_write(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, val1); +} + +static int icm_firmware_init(struct tb *tb) +{ + struct icm *icm = tb_priv(tb); + struct tb_nhi *nhi = tb->nhi; + int ret; + + ret = icm_firmware_start(tb, nhi); + if (ret) { + dev_err(&nhi->pdev->dev, "could not start ICM firmware\n"); + return ret; + } + + if (icm->get_mode) { + ret = icm->get_mode(tb); + + switch (ret) { + case NHI_FW_CM_MODE: + /* Ask ICM to accept all Thunderbolt devices */ + nhi_mailbox_cmd(nhi, NHI_MAILBOX_ALLOW_ALL_DEVS, 0); + break; + + default: + tb_err(tb, "ICM firmware is in wrong mode: %u\n", ret); + return -ENODEV; + } + } + + /* + * Reset both physical ports if there is anything connected to + * them already. + */ + ret = icm_reset_phy_port(tb, 0); + if (ret) + dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n"); + ret = icm_reset_phy_port(tb, 1); + if (ret) + dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n"); + + return 0; +} + +static int icm_driver_ready(struct tb *tb) +{ + int ret; + + ret = icm_firmware_init(tb); + if (ret) + return ret; + + return __icm_driver_ready(tb, &tb->security_level); +} + +static int icm_suspend(struct tb *tb) +{ + return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_SAVE_DEVS, 0); +} + +/* + * Mark all switches (except root switch) below this one unplugged. ICM + * firmware will send us an updated list of switches after we have send + * it driver ready command. If a switch is not in that list it will be + * removed when we perform rescan. 
+ */ +static void icm_unplug_children(struct tb_switch *sw) +{ + unsigned int i; + + if (tb_route(sw)) + sw->is_unplugged = true; + + for (i = 1; i <= sw->config.max_port_number; i++) { + struct tb_port *port = &sw->ports[i]; + + if (tb_is_upstream_port(port)) + continue; + if (!port->remote) + continue; + + icm_unplug_children(port->remote->sw); + } +} + +static void icm_free_unplugged_children(struct tb_switch *sw) +{ + unsigned int i; + + for (i = 1; i <= sw->config.max_port_number; i++) { + struct tb_port *port = &sw->ports[i]; + + if (tb_is_upstream_port(port)) + continue; + if (!port->remote) + continue; + + if (port->remote->sw->is_unplugged) { + tb_switch_remove(port->remote->sw); + port->remote = NULL; + } else { + icm_free_unplugged_children(port->remote->sw); + } + } +} + +static void icm_rescan_work(struct work_struct *work) +{ + struct icm *icm = container_of(work, struct icm, rescan_work.work); + struct tb *tb = icm_to_tb(icm); + + mutex_lock(&tb->lock); + if (tb->root_switch) + icm_free_unplugged_children(tb->root_switch); + mutex_unlock(&tb->lock); +} + +static void icm_complete(struct tb *tb) +{ + struct icm *icm = tb_priv(tb); + + if (tb->nhi->going_away) + return; + + icm_unplug_children(tb->root_switch); + + /* + * Now all existing children should be resumed, start events + * from ICM to get updated status. + */ + __icm_driver_ready(tb, NULL); + + /* + * We do not get notifications of devices that have been + * unplugged during suspend so schedule rescan to clean them up + * if any. + */ + queue_delayed_work(tb->wq, &icm->rescan_work, msecs_to_jiffies(500)); +} + +static int icm_start(struct tb *tb) +{ + int ret; + + tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0); + if (!tb->root_switch) + return -ENODEV; + + ret = tb_switch_add(tb->root_switch); + if (ret) + tb_switch_put(tb->root_switch); + + return ret; +} + +static void icm_stop(struct tb *tb) +{ + struct icm *icm = tb_priv(tb); + + cancel_delayed_work(&icm->rescan_work); + tb_switch_remove(tb->root_switch); + tb->root_switch = NULL; + nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0); +} + +/* Falcon Ridge and Alpine Ridge */ +static const struct tb_cm_ops icm_fr_ops = { + .driver_ready = icm_driver_ready, + .start = icm_start, + .stop = icm_stop, + .suspend = icm_suspend, + .complete = icm_complete, + .handle_event = icm_handle_event, + .approve_switch = icm_fr_approve_switch, + .add_switch_key = icm_fr_add_switch_key, + .challenge_switch_key = icm_fr_challenge_switch_key, +}; + +struct tb *icm_probe(struct tb_nhi *nhi) +{ + struct icm *icm; + struct tb *tb; + + tb = tb_domain_alloc(nhi, sizeof(struct icm)); + if (!tb) + return NULL; + + icm = tb_priv(tb); + INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work); + mutex_init(&icm->request_lock); + + switch (nhi->pdev->device) { + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: + icm->is_supported = icm_fr_is_supported; + icm->get_route = icm_fr_get_route; + icm->device_connected = icm_fr_device_connected; + icm->device_disconnected = icm_fr_device_disconnected; + tb->cm_ops = &icm_fr_ops; + break; + + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI: + case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI: + icm->is_supported = icm_ar_is_supported; + icm->get_mode = icm_ar_get_mode; + icm->get_route = icm_ar_get_route; + icm->device_connected = icm_fr_device_connected; + 
icm->device_disconnected = icm_fr_device_disconnected; + tb->cm_ops = &icm_fr_ops; + break; + } + + if (!icm->is_supported || !icm->is_supported(tb)) { + dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n"); + tb_domain_put(tb); + return NULL; + } + + return tb; +} diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index 14311535661d..05af126a2435 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -13,7 +13,6 @@ #include #include #include -#include #include #include "nhi.h" @@ -668,6 +667,22 @@ static int nhi_resume_noirq(struct device *dev) return tb_domain_resume_noirq(tb); } +static int nhi_suspend(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct tb *tb = pci_get_drvdata(pdev); + + return tb_domain_suspend(tb); +} + +static void nhi_complete(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct tb *tb = pci_get_drvdata(pdev); + + tb_domain_complete(tb); +} + static void nhi_shutdown(struct tb_nhi *nhi) { int i; @@ -784,10 +799,16 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id) /* magic value - clock related? */ iowrite32(3906250 / 10000, nhi->iobase + 0x38c00); - dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n"); - tb = tb_probe(nhi); + tb = icm_probe(nhi); if (!tb) + tb = tb_probe(nhi); + if (!tb) { + dev_err(&nhi->pdev->dev, + "failed to determine connection manager, aborting\n"); return -ENODEV; + } + + dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n"); res = tb_domain_add(tb); if (res) { @@ -826,6 +847,10 @@ static const struct dev_pm_ops nhi_pm_ops = { * pci-tunnels stay alive. */ .restore_noirq = nhi_resume_noirq, + .suspend = nhi_suspend, + .freeze = nhi_suspend, + .poweroff = nhi_suspend, + .complete = nhi_complete, }; static struct pci_device_id nhi_ids[] = { @@ -886,8 +911,6 @@ static int __init nhi_init(void) { int ret; - if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc.")) - return -ENOSYS; ret = tb_domain_init(); if (ret) return ret; diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h index 322fe1fa3a3c..09ed574e92ff 100644 --- a/drivers/thunderbolt/nhi_regs.h +++ b/drivers/thunderbolt/nhi_regs.h @@ -118,4 +118,11 @@ struct ring_desc { #define REG_OUTMAIL_CMD_OPMODE_SHIFT 8 #define REG_OUTMAIL_CMD_OPMODE_MASK GENMASK(11, 8) +#define REG_FW_STS 0x39944 +#define REG_FW_STS_NVM_AUTH_DONE BIT(31) +#define REG_FW_STS_CIO_RESET_REQ BIT(30) +#define REG_FW_STS_ICM_EN_CPU BIT(2) +#define REG_FW_STS_ICM_EN_INVERT BIT(1) +#define REG_FW_STS_ICM_EN BIT(0) + #endif diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 4b47e0999cda..1524edf42ee8 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -9,6 +9,9 @@ #include "tb.h" +/* Switch authorization from userspace is serialized by this lock */ +static DEFINE_MUTEX(switch_lock); + /* port utility functions */ static const char *tb_port_type(struct tb_regs_port_header *port) @@ -310,6 +313,75 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active) sw->cap_plug_events + 1, 1); } +static ssize_t authorized_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); + + return sprintf(buf, "%u\n", sw->authorized); +} + +static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val) +{ + int ret = -EINVAL; + + if (mutex_lock_interruptible(&switch_lock)) + return -ERESTARTSYS; + + if (sw->authorized) + goto unlock; + + 
switch (val) { + /* Approve switch */ + case 1: + if (sw->key) + ret = tb_domain_approve_switch_key(sw->tb, sw); + else + ret = tb_domain_approve_switch(sw->tb, sw); + break; + + /* Challenge switch */ + case 2: + if (sw->key) + ret = tb_domain_challenge_switch_key(sw->tb, sw); + break; + + default: + break; + } + + if (!ret) { + sw->authorized = val; + /* Notify status change to the userspace */ + kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE); + } + +unlock: + mutex_unlock(&switch_lock); + return ret; +} + +static ssize_t authorized_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct tb_switch *sw = tb_to_switch(dev); + unsigned int val; + ssize_t ret; + + ret = kstrtouint(buf, 0, &val); + if (ret) + return ret; + if (val > 2) + return -EINVAL; + + ret = tb_switch_set_authorized(sw, val); + + return ret ? ret : count; +} +static DEVICE_ATTR_RW(authorized); + static ssize_t device_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -328,6 +400,54 @@ device_name_show(struct device *dev, struct device_attribute *attr, char *buf) } static DEVICE_ATTR_RO(device_name); +static ssize_t key_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); + ssize_t ret; + + if (mutex_lock_interruptible(&switch_lock)) + return -ERESTARTSYS; + + if (sw->key) + ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key); + else + ret = sprintf(buf, "\n"); + + mutex_unlock(&switch_lock); + return ret; +} + +static ssize_t key_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct tb_switch *sw = tb_to_switch(dev); + u8 key[TB_SWITCH_KEY_SIZE]; + ssize_t ret = count; + + if (count < 64) + return -EINVAL; + + if (hex2bin(key, buf, sizeof(key))) + return -EINVAL; + + if (mutex_lock_interruptible(&switch_lock)) + return -ERESTARTSYS; + + if (sw->authorized) { + ret = -EBUSY; + } else { + kfree(sw->key); + sw->key = kmemdup(key, sizeof(key), GFP_KERNEL); + if (!sw->key) + ret = -ENOMEM; + } + + mutex_unlock(&switch_lock); + return ret; +} +static DEVICE_ATTR_RW(key); + static ssize_t vendor_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -356,15 +476,35 @@ static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr, static DEVICE_ATTR_RO(unique_id); static struct attribute *switch_attrs[] = { + &dev_attr_authorized.attr, &dev_attr_device.attr, &dev_attr_device_name.attr, + &dev_attr_key.attr, &dev_attr_vendor.attr, &dev_attr_vendor_name.attr, &dev_attr_unique_id.attr, NULL, }; +static umode_t switch_attr_is_visible(struct kobject *kobj, + struct attribute *attr, int n) +{ + struct device *dev = container_of(kobj, struct device, kobj); + struct tb_switch *sw = tb_to_switch(dev); + + if (attr == &dev_attr_key.attr) { + if (tb_route(sw) && + sw->tb->security_level == TB_SECURITY_SECURE && + sw->security_level == TB_SECURITY_SECURE) + return attr->mode; + return 0; + } + + return attr->mode; +} + static struct attribute_group switch_group = { + .is_visible = switch_attr_is_visible, .attrs = switch_attrs, }; @@ -384,6 +524,7 @@ static void tb_switch_release(struct device *dev) kfree(sw->vendor_name); kfree(sw->ports); kfree(sw->drom); + kfree(sw->key); kfree(sw); } @@ -490,6 +631,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, } sw->cap_plug_events = cap; + /* Root switch is always authorized */ + if (!route) + sw->authorized = true; + device_initialize(&sw->dev); 
sw->dev.parent = parent; sw->dev.bus = &tb_bus_type; @@ -754,3 +899,80 @@ void tb_switch_suspend(struct tb_switch *sw) * effect? */ } + +struct tb_sw_lookup { + struct tb *tb; + u8 link; + u8 depth; + const uuid_be *uuid; +}; + +static int tb_switch_match(struct device *dev, void *data) +{ + struct tb_switch *sw = tb_to_switch(dev); + struct tb_sw_lookup *lookup = data; + + if (!sw) + return 0; + if (sw->tb != lookup->tb) + return 0; + + if (lookup->uuid) + return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid)); + + /* Root switch is matched only by depth */ + if (!lookup->depth) + return !sw->depth; + + return sw->link == lookup->link && sw->depth == lookup->depth; +} + +/** + * tb_switch_find_by_link_depth() - Find switch by link and depth + * @tb: Domain the switch belongs + * @link: Link number the switch is connected + * @depth: Depth of the switch in link + * + * Returned switch has reference count increased so the caller needs to + * call tb_switch_put() when done with the switch. + */ +struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth) +{ + struct tb_sw_lookup lookup; + struct device *dev; + + memset(&lookup, 0, sizeof(lookup)); + lookup.tb = tb; + lookup.link = link; + lookup.depth = depth; + + dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match); + if (dev) + return tb_to_switch(dev); + + return NULL; +} + +/** + * tb_switch_find_by_link_depth() - Find switch by UUID + * @tb: Domain the switch belongs + * @uuid: UUID to look for + * + * Returned switch has reference count increased so the caller needs to + * call tb_switch_put() when done with the switch. + */ +struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid) +{ + struct tb_sw_lookup lookup; + struct device *dev; + + memset(&lookup, 0, sizeof(lookup)); + lookup.tb = tb; + lookup.uuid = uuid; + + dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match); + if (dev) + return tb_to_switch(dev); + + return NULL; +} diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index ea9de49b5e10..ad2304bad592 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -7,6 +7,7 @@ #include #include #include +#include #include "tb.h" #include "tb_regs.h" @@ -71,6 +72,8 @@ static void tb_scan_port(struct tb_port *port) return; } + sw->authorized = true; + if (tb_switch_add(sw)) { tb_switch_put(sw); return; @@ -443,10 +446,14 @@ struct tb *tb_probe(struct tb_nhi *nhi) struct tb_cm *tcm; struct tb *tb; + if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc.")) + return NULL; + tb = tb_domain_alloc(nhi, sizeof(*tcm)); if (!tb) return NULL; + tb->security_level = TB_SECURITY_NONE; tb->cm_ops = &tb_cm_ops; tcm = tb_priv(tb); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 31521c531715..a998b3a251d5 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -14,6 +14,24 @@ #include "ctl.h" #include "dma_port.h" +/** + * enum tb_security_level - Thunderbolt security level + * @TB_SECURITY_NONE: No security, legacy mode + * @TB_SECURITY_USER: User approval required at minimum + * @TB_SECURITY_SECURE: One time saved key required at minimum + * @TB_SECURITY_DPONLY: Only tunnel Display port (and USB) + */ +enum tb_security_level { + TB_SECURITY_NONE, + TB_SECURITY_USER, + TB_SECURITY_SECURE, + TB_SECURITY_DPONLY, +}; + +#define TB_SWITCH_KEY_SIZE 32 +/* Each physical port contains 2 links on modern controllers */ +#define TB_SWITCH_LINKS_PER_PHY_PORT 2 + /** * struct tb_switch - a thunderbolt switch * @dev: Device for 
the switch @@ -33,6 +51,19 @@ * @cap_plug_events: Offset to the plug events capability (%0 if not found) * @is_unplugged: The switch is going away * @drom: DROM of the switch (%NULL if not found) + * @authorized: Whether the switch is authorized by user or policy + * @work: Work used to automatically authorize a switch + * @security_level: Switch supported security level + * @key: Contains the key used to challenge the device or %NULL if not + * supported. Size of the key is %TB_SWITCH_KEY_SIZE. + * @connection_id: Connection ID used with ICM messaging + * @connection_key: Connection key used with ICM messaging + * @link: Root switch link this switch is connected (ICM only) + * @depth: Depth in the chain this switch is connected (ICM only) + * + * When the switch is being added or removed to the domain (other + * switches) you need to have domain lock held. For switch authorization + * internal switch_lock is enough. */ struct tb_switch { struct device dev; @@ -50,6 +81,14 @@ struct tb_switch { int cap_plug_events; bool is_unplugged; u8 *drom; + unsigned int authorized; + struct work_struct work; + enum tb_security_level security_level; + u8 *key; + u8 connection_id; + u8 connection_key; + u8 link; + u8 depth; }; /** @@ -121,19 +160,33 @@ struct tb_path { /** * struct tb_cm_ops - Connection manager specific operations vector + * @driver_ready: Called right after control channel is started. Used by + * ICM to send driver ready message to the firmware. * @start: Starts the domain * @stop: Stops the domain * @suspend_noirq: Connection manager specific suspend_noirq * @resume_noirq: Connection manager specific resume_noirq + * @suspend: Connection manager specific suspend + * @complete: Connection manager specific complete * @handle_event: Handle thunderbolt event + * @approve_switch: Approve switch + * @add_switch_key: Add key to switch + * @challenge_switch_key: Challenge switch using key */ struct tb_cm_ops { + int (*driver_ready)(struct tb *tb); int (*start)(struct tb *tb); void (*stop)(struct tb *tb); int (*suspend_noirq)(struct tb *tb); int (*resume_noirq)(struct tb *tb); + int (*suspend)(struct tb *tb); + void (*complete)(struct tb *tb); void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type, const void *buf, size_t size); + int (*approve_switch)(struct tb *tb, struct tb_switch *sw); + int (*add_switch_key)(struct tb *tb, struct tb_switch *sw); + int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw, + const u8 *challenge, u8 *response); }; /** @@ -147,6 +200,7 @@ struct tb_cm_ops { * @root_switch: Root switch of this domain * @cm_ops: Connection manager specific operations vector * @index: Linux assigned domain number + * @security_level: Current security level * @privdata: Private connection manager specific data */ struct tb { @@ -158,6 +212,7 @@ struct tb { struct tb_switch *root_switch; const struct tb_cm_ops *cm_ops; int index; + enum tb_security_level security_level; unsigned long privdata[0]; }; @@ -188,6 +243,16 @@ static inline u64 tb_route(struct tb_switch *sw) return ((u64) sw->config.route_hi) << 32 | sw->config.route_lo; } +static inline struct tb_port *tb_port_at(u64 route, struct tb_switch *sw) +{ + u8 port; + + port = route >> (sw->config.depth * 8); + if (WARN_ON(port > sw->config.max_port_number)) + return NULL; + return &sw->ports[port]; +} + static inline int tb_sw_read(struct tb_switch *sw, void *buffer, enum tb_cfg_space space, u32 offset, u32 length) { @@ -266,6 +331,7 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer, 
#define tb_port_info(port, fmt, arg...) \ __TB_PORT_PRINT(tb_info, port, fmt, ##arg) +struct tb *icm_probe(struct tb_nhi *nhi); struct tb *tb_probe(struct tb_nhi *nhi); extern struct bus_type tb_bus_type; @@ -280,6 +346,11 @@ int tb_domain_add(struct tb *tb); void tb_domain_remove(struct tb *tb); int tb_domain_suspend_noirq(struct tb *tb); int tb_domain_resume_noirq(struct tb *tb); +int tb_domain_suspend(struct tb *tb); +void tb_domain_complete(struct tb *tb); +int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw); +int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw); +int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw); static inline void tb_domain_put(struct tb *tb) { @@ -296,6 +367,14 @@ int tb_switch_resume(struct tb_switch *sw); int tb_switch_reset(struct tb *tb, u64 route); void tb_sw_set_unplugged(struct tb_switch *sw); struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route); +struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, + u8 depth); +struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid); + +static inline unsigned int tb_switch_phy_port_from_link(unsigned int link) +{ + return (link - 1) / TB_SWITCH_LINKS_PER_PHY_PORT; +} static inline void tb_switch_put(struct tb_switch *sw) { diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h index 761d56287149..85b6d33c0919 100644 --- a/drivers/thunderbolt/tb_msgs.h +++ b/drivers/thunderbolt/tb_msgs.h @@ -13,6 +13,7 @@ #define _TB_MSGS #include +#include enum tb_cfg_pkg_type { TB_CFG_PKG_READ = 1, @@ -24,6 +25,9 @@ enum tb_cfg_pkg_type { TB_CFG_PKG_XDOMAIN_RESP = 7, TB_CFG_PKG_OVERRIDE = 8, TB_CFG_PKG_RESET = 9, + TB_CFG_PKG_ICM_EVENT = 10, + TB_CFG_PKG_ICM_CMD = 11, + TB_CFG_PKG_ICM_RESP = 12, TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd, }; @@ -105,4 +109,152 @@ struct cfg_pts_pkg { u32 data; } __packed; +/* ICM messages */ + +enum icm_pkg_code { + ICM_GET_TOPOLOGY = 0x1, + ICM_DRIVER_READY = 0x3, + ICM_APPROVE_DEVICE = 0x4, + ICM_CHALLENGE_DEVICE = 0x5, + ICM_ADD_DEVICE_KEY = 0x6, + ICM_GET_ROUTE = 0xa, +}; + +enum icm_event_code { + ICM_EVENT_DEVICE_CONNECTED = 3, + ICM_EVENT_DEVICE_DISCONNECTED = 4, +}; + +struct icm_pkg_header { + u8 code; + u8 flags; + u8 packet_id; + u8 total_packets; +} __packed; + +#define ICM_FLAGS_ERROR BIT(0) +#define ICM_FLAGS_NO_KEY BIT(1) +#define ICM_FLAGS_SLEVEL_SHIFT 3 +#define ICM_FLAGS_SLEVEL_MASK GENMASK(4, 3) + +struct icm_pkg_driver_ready { + struct icm_pkg_header hdr; +} __packed; + +struct icm_pkg_driver_ready_response { + struct icm_pkg_header hdr; + u8 romver; + u8 ramver; + u16 security_level; +} __packed; + +/* Falcon Ridge & Alpine Ridge common messages */ + +struct icm_fr_pkg_get_topology { + struct icm_pkg_header hdr; +} __packed; + +#define ICM_GET_TOPOLOGY_PACKETS 14 + +struct icm_fr_pkg_get_topology_response { + struct icm_pkg_header hdr; + u32 route_lo; + u32 route_hi; + u8 first_data; + u8 second_data; + u8 drom_i2c_address_index; + u8 switch_index; + u32 reserved[2]; + u32 ports[16]; + u32 port_hop_info[16]; +} __packed; + +#define ICM_SWITCH_USED BIT(0) +#define ICM_SWITCH_UPSTREAM_PORT_MASK GENMASK(7, 1) +#define ICM_SWITCH_UPSTREAM_PORT_SHIFT 1 + +#define ICM_PORT_TYPE_MASK GENMASK(23, 0) +#define ICM_PORT_INDEX_SHIFT 24 +#define ICM_PORT_INDEX_MASK GENMASK(31, 24) + +struct icm_fr_event_device_connected { + struct icm_pkg_header hdr; + uuid_be ep_uuid; + u8 connection_key; + u8 connection_id; + u16 link_info; + u32 ep_name[55]; +} __packed; + +#define 
ICM_LINK_INFO_LINK_MASK 0x7 +#define ICM_LINK_INFO_DEPTH_SHIFT 4 +#define ICM_LINK_INFO_DEPTH_MASK GENMASK(7, 4) +#define ICM_LINK_INFO_APPROVED BIT(8) + +struct icm_fr_pkg_approve_device { + struct icm_pkg_header hdr; + uuid_be ep_uuid; + u8 connection_key; + u8 connection_id; + u16 reserved; +} __packed; + +struct icm_fr_event_device_disconnected { + struct icm_pkg_header hdr; + u16 reserved; + u16 link_info; +} __packed; + +struct icm_fr_pkg_add_device_key { + struct icm_pkg_header hdr; + uuid_be ep_uuid; + u8 connection_key; + u8 connection_id; + u16 reserved; + u32 key[8]; +} __packed; + +struct icm_fr_pkg_add_device_key_response { + struct icm_pkg_header hdr; + uuid_be ep_uuid; + u8 connection_key; + u8 connection_id; + u16 reserved; +} __packed; + +struct icm_fr_pkg_challenge_device { + struct icm_pkg_header hdr; + uuid_be ep_uuid; + u8 connection_key; + u8 connection_id; + u16 reserved; + u32 challenge[8]; +} __packed; + +struct icm_fr_pkg_challenge_device_response { + struct icm_pkg_header hdr; + uuid_be ep_uuid; + u8 connection_key; + u8 connection_id; + u16 reserved; + u32 challenge[8]; + u32 response[8]; +} __packed; + +/* Alpine Ridge only messages */ + +struct icm_ar_pkg_get_route { + struct icm_pkg_header hdr; + u16 reserved; + u16 link_info; +} __packed; + +struct icm_ar_pkg_get_route_response { + struct icm_pkg_header hdr; + u16 reserved; + u16 link_info; + u32 route_hi; + u32 route_lo; +} __packed; + #endif -- cgit v1.2.3 From e6b245ccd524441f462f1ca1fe726123dcedeeee Mon Sep 17 00:00:00 2001 From: Mika Westerberg Date: Tue, 6 Jun 2017 15:25:17 +0300 Subject: thunderbolt: Add support for host and device NVM firmware upgrade Starting from Intel Falcon Ridge the NVM firmware can be upgraded by using DMA configuration based mailbox commands. If we detect that the host or device (device support starts from Intel Alpine Ridge) has the DMA configuration based mailbox we expose NVM information to the userspace as two separate Linux NVMem devices: nvm_active and nvm_non_active. The former is read-only portion of the active NVM which firmware upgrade tools can be use to find out suitable NVM image if the device identification strings are not enough. The latter is write-only portion where the new NVM image is to be written by the userspace. It is up to the userspace to find out right NVM image (the kernel does very minimal validation). The ICM firmware itself authenticates the new NVM firmware and fails the operation if it is not what is expected. We also expose two new sysfs files per each switch: nvm_version and nvm_authenticate which can be used to read the active NVM version and start the upgrade process. We also introduce safe mode which is the mode a switch goes when it does not have properly authenticated firmware. In this mode the switch only accepts a couple of commands including flashing a new NVM firmware image and triggering power cycle. This code is based on the work done by Amir Levy and Michael Jamet. 
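
To make the flow described above concrete, here is a minimal, hypothetical userspace sketch of an NVM upgrade. The sysfs and NVMem paths are assumptions chosen purely for illustration (the non-active NVM portion is exposed as an NVMem device and the switch directory carries the nvm_version and nvm_authenticate attributes added by this patch); a real upgrade tool would discover both paths at runtime and perform far more validation than the kernel's minimal checks.

/*
 * Hypothetical userspace sketch of the NVM upgrade flow. The paths below
 * are illustrative assumptions, not part of the patch.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SWITCH_DIR "/sys/bus/thunderbolt/devices/0-0"              /* assumed */
#define NVM_WR     "/sys/bus/nvmem/devices/nvm_non_active0/nvmem"  /* assumed */

static int write_file(const char *path, const void *buf, size_t len)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, buf, len);
	close(fd);
	return n == (ssize_t)len ? 0 : -1;
}

int main(int argc, char **argv)
{
	char version[16] = "";
	char *image;
	long size;
	FILE *f;
	int fd;

	if (argc != 2)
		return 1;

	/* Read the whole NVM image supplied by the user */
	f = fopen(argv[1], "rb");
	if (!f)
		return 1;
	fseek(f, 0, SEEK_END);
	size = ftell(f);
	rewind(f);
	if (size <= 0)
		return 1;
	image = malloc(size);
	if (!image || fread(image, 1, size, f) != (size_t)size)
		return 1;
	fclose(f);

	/* Show the active NVM version (returns -ENODATA in safe mode) */
	fd = open(SWITCH_DIR "/nvm_version", O_RDONLY);
	if (fd >= 0) {
		read(fd, version, sizeof(version) - 1);
		close(fd);
		printf("active NVM version: %s", version);
	}

	/* Write the new image to the non-active NVM portion ... */
	if (write_file(NVM_WR, image, size))
		return 1;

	/* ... then ask the kernel/ICM to authenticate and take it into use */
	if (write_file(SWITCH_DIR "/nvm_authenticate", "1", 1))
		return 1;

	free(image);
	return 0;
}

If authentication fails, the failure status can later be read back from nvm_authenticate, as documented in the ABI update included in this patch.
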
Signed-off-by: Michael Jamet Signed-off-by: Mika Westerberg Reviewed-by: Yehezkel Bernat Reviewed-by: Andy Shevchenko Signed-off-by: Andreas Noever Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-bus-thunderbolt | 26 + drivers/thunderbolt/Kconfig | 1 + drivers/thunderbolt/domain.c | 18 + drivers/thunderbolt/icm.c | 33 +- drivers/thunderbolt/nhi.h | 1 + drivers/thunderbolt/switch.c | 603 +++++++++++++++++++++++- drivers/thunderbolt/tb.c | 7 + drivers/thunderbolt/tb.h | 40 +- 8 files changed, 706 insertions(+), 23 deletions(-) (limited to 'Documentation/ABI') diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt index 05b7f9a6431f..2a98149943ea 100644 --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt @@ -82,3 +82,29 @@ Description: This attribute contains unique_id string of this device. This is either read from hardware registers (UUID on newer hardware) or based on UID from the device DROM. Can be used to uniquely identify particular device. + +What: /sys/bus/thunderbolt/devices/.../nvm_version +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: If the device has upgradeable firmware the version + number is available here. Format: %x.%x, major.minor. + If the device is in safe mode reading the file returns + -ENODATA instead as the NVM version is not available. + +What: /sys/bus/thunderbolt/devices/.../nvm_authenticate +Date: Sep 2017 +KernelVersion: 4.13 +Contact: thunderbolt-software@lists.01.org +Description: When new NVM image is written to the non-active NVM + area (through non_activeX NVMem device), the + authentication procedure is started by writing 1 to + this file. If everything goes well, the device is + restarted with the new NVM firmware. If the image + verification fails an error code is returned instead. + + When read holds status of the last authentication + operation if an error occurred during the process. This + is directly the status value from the DMA configuration + based mailbox before the device is power cycled. Writing + 0 here clears the status. diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig index a9cc724985ad..f4869c38c7e4 100644 --- a/drivers/thunderbolt/Kconfig +++ b/drivers/thunderbolt/Kconfig @@ -6,6 +6,7 @@ menuconfig THUNDERBOLT select CRC32 select CRYPTO select CRYPTO_HASH + select NVMEM help Thunderbolt Controller driver. This driver is required if you want to hotplug Thunderbolt devices on Apple hardware or on PCs diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c index f71b63e90016..9f2dcd48974d 100644 --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c @@ -426,6 +426,23 @@ err_free_tfm: return ret; } +/** + * tb_domain_disconnect_pcie_paths() - Disconnect all PCIe paths + * @tb: Domain whose PCIe paths to disconnect + * + * This needs to be called in preparation for NVM upgrade of the host + * controller. Makes sure all PCIe paths are disconnected. + * + * Return %0 on success and negative errno in case of error. 
+ */ +int tb_domain_disconnect_pcie_paths(struct tb *tb) +{ + if (!tb->cm_ops->disconnect_pcie_paths) + return -EPERM; + + return tb->cm_ops->disconnect_pcie_paths(tb); +} + int tb_domain_init(void) { return bus_register(&tb_bus_type); @@ -435,4 +452,5 @@ void tb_domain_exit(void) { bus_unregister(&tb_bus_type); ida_destroy(&tb_domain_ida); + tb_switch_exit(); } diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c index 0ffa4ec249ac..8ee340290219 100644 --- a/drivers/thunderbolt/icm.c +++ b/drivers/thunderbolt/icm.c @@ -54,6 +54,7 @@ * where ICM needs to be started manually * @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides * (only set when @upstream_port is not %NULL) + * @safe_mode: ICM is in safe mode * @is_supported: Checks if we can support ICM on this controller * @get_mode: Read and return the ICM firmware mode (optional) * @get_route: Find a route string for given switch @@ -65,6 +66,7 @@ struct icm { struct delayed_work rescan_work; struct pci_dev *upstream_port; int vnd_cap; + bool safe_mode; bool (*is_supported)(struct tb *tb); int (*get_mode)(struct tb *tb); int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route); @@ -852,6 +854,10 @@ static int icm_firmware_init(struct tb *tb) ret = icm->get_mode(tb); switch (ret) { + case NHI_FW_SAFE_MODE: + icm->safe_mode = true; + break; + case NHI_FW_CM_MODE: /* Ask ICM to accept all Thunderbolt devices */ nhi_mailbox_cmd(nhi, NHI_MAILBOX_ALLOW_ALL_DEVS, 0); @@ -879,12 +885,20 @@ static int icm_firmware_init(struct tb *tb) static int icm_driver_ready(struct tb *tb) { + struct icm *icm = tb_priv(tb); int ret; ret = icm_firmware_init(tb); if (ret) return ret; + if (icm->safe_mode) { + tb_info(tb, "Thunderbolt host controller is in safe mode.\n"); + tb_info(tb, "You need to update NVM firmware of the controller before it can be used.\n"); + tb_info(tb, "For latest updates check https://thunderbolttechnology.net/updates.\n"); + return 0; + } + return __icm_driver_ready(tb, &tb->security_level); } @@ -975,12 +989,23 @@ static void icm_complete(struct tb *tb) static int icm_start(struct tb *tb) { + struct icm *icm = tb_priv(tb); int ret; - tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0); + if (icm->safe_mode) + tb->root_switch = tb_switch_alloc_safe_mode(tb, &tb->dev, 0); + else + tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0); if (!tb->root_switch) return -ENODEV; + /* + * NVM upgrade has not been tested on Apple systems and they + * don't provide images publicly either. To be on the safe side + * prevent root switch NVM upgrade on Macs for now. 
+ */ + tb->root_switch->no_nvm_upgrade = is_apple(); + ret = tb_switch_add(tb->root_switch); if (ret) tb_switch_put(tb->root_switch); @@ -998,6 +1023,11 @@ static void icm_stop(struct tb *tb) nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0); } +static int icm_disconnect_pcie_paths(struct tb *tb) +{ + return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0); +} + /* Falcon Ridge and Alpine Ridge */ static const struct tb_cm_ops icm_fr_ops = { .driver_ready = icm_driver_ready, @@ -1009,6 +1039,7 @@ static const struct tb_cm_ops icm_fr_ops = { .approve_switch = icm_fr_approve_switch, .add_switch_key = icm_fr_add_switch_key, .challenge_switch_key = icm_fr_challenge_switch_key, + .disconnect_pcie_paths = icm_disconnect_pcie_paths, }; struct tb *icm_probe(struct tb_nhi *nhi) diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h index 953864ae0ab3..5b5bb2c436be 100644 --- a/drivers/thunderbolt/nhi.h +++ b/drivers/thunderbolt/nhi.h @@ -155,6 +155,7 @@ enum nhi_fw_mode { enum nhi_mailbox_cmd { NHI_MAILBOX_SAVE_DEVS = 0x05, + NHI_MAILBOX_DISCONNECT_PCIE_PATHS = 0x06, NHI_MAILBOX_DRV_UNLOADS = 0x07, NHI_MAILBOX_ALLOW_ALL_DEVS = 0x23, }; diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 1524edf42ee8..ab3e8f410444 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -5,13 +5,395 @@ */ #include +#include +#include +#include #include +#include #include "tb.h" /* Switch authorization from userspace is serialized by this lock */ static DEFINE_MUTEX(switch_lock); +/* Switch NVM support */ + +#define NVM_DEVID 0x05 +#define NVM_VERSION 0x08 +#define NVM_CSS 0x10 +#define NVM_FLASH_SIZE 0x45 + +#define NVM_MIN_SIZE SZ_32K +#define NVM_MAX_SIZE SZ_512K + +static DEFINE_IDA(nvm_ida); + +struct nvm_auth_status { + struct list_head list; + uuid_be uuid; + u32 status; +}; + +/* + * Hold NVM authentication failure status per switch This information + * needs to stay around even when the switch gets power cycled so we + * keep it separately. + */ +static LIST_HEAD(nvm_auth_status_cache); +static DEFINE_MUTEX(nvm_auth_status_lock); + +static struct nvm_auth_status *__nvm_get_auth_status(const struct tb_switch *sw) +{ + struct nvm_auth_status *st; + + list_for_each_entry(st, &nvm_auth_status_cache, list) { + if (!uuid_be_cmp(st->uuid, *sw->uuid)) + return st; + } + + return NULL; +} + +static void nvm_get_auth_status(const struct tb_switch *sw, u32 *status) +{ + struct nvm_auth_status *st; + + mutex_lock(&nvm_auth_status_lock); + st = __nvm_get_auth_status(sw); + mutex_unlock(&nvm_auth_status_lock); + + *status = st ? 
st->status : 0; +} + +static void nvm_set_auth_status(const struct tb_switch *sw, u32 status) +{ + struct nvm_auth_status *st; + + if (WARN_ON(!sw->uuid)) + return; + + mutex_lock(&nvm_auth_status_lock); + st = __nvm_get_auth_status(sw); + + if (!st) { + st = kzalloc(sizeof(*st), GFP_KERNEL); + if (!st) + goto unlock; + + memcpy(&st->uuid, sw->uuid, sizeof(st->uuid)); + INIT_LIST_HEAD(&st->list); + list_add_tail(&st->list, &nvm_auth_status_cache); + } + + st->status = status; +unlock: + mutex_unlock(&nvm_auth_status_lock); +} + +static void nvm_clear_auth_status(const struct tb_switch *sw) +{ + struct nvm_auth_status *st; + + mutex_lock(&nvm_auth_status_lock); + st = __nvm_get_auth_status(sw); + if (st) { + list_del(&st->list); + kfree(st); + } + mutex_unlock(&nvm_auth_status_lock); +} + +static int nvm_validate_and_write(struct tb_switch *sw) +{ + unsigned int image_size, hdr_size; + const u8 *buf = sw->nvm->buf; + u16 ds_size; + int ret; + + if (!buf) + return -EINVAL; + + image_size = sw->nvm->buf_data_size; + if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE) + return -EINVAL; + + /* + * FARB pointer must point inside the image and must at least + * contain parts of the digital section we will be reading here. + */ + hdr_size = (*(u32 *)buf) & 0xffffff; + if (hdr_size + NVM_DEVID + 2 >= image_size) + return -EINVAL; + + /* Digital section start should be aligned to 4k page */ + if (!IS_ALIGNED(hdr_size, SZ_4K)) + return -EINVAL; + + /* + * Read digital section size and check that it also fits inside + * the image. + */ + ds_size = *(u16 *)(buf + hdr_size); + if (ds_size >= image_size) + return -EINVAL; + + if (!sw->safe_mode) { + u16 device_id; + + /* + * Make sure the device ID in the image matches the one + * we read from the switch config space. + */ + device_id = *(u16 *)(buf + hdr_size + NVM_DEVID); + if (device_id != sw->config.device_id) + return -EINVAL; + + if (sw->generation < 3) { + /* Write CSS headers first */ + ret = dma_port_flash_write(sw->dma_port, + DMA_PORT_CSS_ADDRESS, buf + NVM_CSS, + DMA_PORT_CSS_MAX_SIZE); + if (ret) + return ret; + } + + /* Skip headers in the image */ + buf += hdr_size; + image_size -= hdr_size; + } + + return dma_port_flash_write(sw->dma_port, 0, buf, image_size); +} + +static int nvm_authenticate_host(struct tb_switch *sw) +{ + int ret; + + /* + * Root switch NVM upgrade requires that we disconnect the + * existing PCIe paths first (in case it is not in safe mode + * already). + */ + if (!sw->safe_mode) { + ret = tb_domain_disconnect_pcie_paths(sw->tb); + if (ret) + return ret; + /* + * The host controller goes away pretty soon after this if + * everything goes well so getting timeout is expected. + */ + ret = dma_port_flash_update_auth(sw->dma_port); + return ret == -ETIMEDOUT ? 0 : ret; + } + + /* + * From safe mode we can get out by just power cycling the + * switch. + */ + dma_port_power_cycle(sw->dma_port); + return 0; +} + +static int nvm_authenticate_device(struct tb_switch *sw) +{ + int ret, retries = 10; + + ret = dma_port_flash_update_auth(sw->dma_port); + if (ret && ret != -ETIMEDOUT) + return ret; + + /* + * Poll here for the authentication status. It takes some time + * for the device to respond (we get timeout for a while). Once + * we get response the device needs to be power cycled in order + * to the new NVM to be taken into use. 
+ */ + do { + u32 status; + + ret = dma_port_flash_update_auth_status(sw->dma_port, &status); + if (ret < 0 && ret != -ETIMEDOUT) + return ret; + if (ret > 0) { + if (status) { + tb_sw_warn(sw, "failed to authenticate NVM\n"); + nvm_set_auth_status(sw, status); + } + + tb_sw_info(sw, "power cycling the switch now\n"); + dma_port_power_cycle(sw->dma_port); + return 0; + } + + msleep(500); + } while (--retries); + + return -ETIMEDOUT; +} + +static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val, + size_t bytes) +{ + struct tb_switch *sw = priv; + + return dma_port_flash_read(sw->dma_port, offset, val, bytes); +} + +static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val, + size_t bytes) +{ + struct tb_switch *sw = priv; + int ret = 0; + + if (mutex_lock_interruptible(&switch_lock)) + return -ERESTARTSYS; + + /* + * Since writing the NVM image might require some special steps, + * for example when CSS headers are written, we cache the image + * locally here and handle the special cases when the user asks + * us to authenticate the image. + */ + if (!sw->nvm->buf) { + sw->nvm->buf = vmalloc(NVM_MAX_SIZE); + if (!sw->nvm->buf) { + ret = -ENOMEM; + goto unlock; + } + } + + sw->nvm->buf_data_size = offset + bytes; + memcpy(sw->nvm->buf + offset, val, bytes); + +unlock: + mutex_unlock(&switch_lock); + + return ret; +} + +static struct nvmem_device *register_nvmem(struct tb_switch *sw, int id, + size_t size, bool active) +{ + struct nvmem_config config; + + memset(&config, 0, sizeof(config)); + + if (active) { + config.name = "nvm_active"; + config.reg_read = tb_switch_nvm_read; + } else { + config.name = "nvm_non_active"; + config.reg_write = tb_switch_nvm_write; + } + + config.id = id; + config.stride = 4; + config.word_size = 4; + config.size = size; + config.dev = &sw->dev; + config.owner = THIS_MODULE; + config.root_only = true; + config.priv = sw; + + return nvmem_register(&config); +} + +static int tb_switch_nvm_add(struct tb_switch *sw) +{ + struct nvmem_device *nvm_dev; + struct tb_switch_nvm *nvm; + u32 val; + int ret; + + if (!sw->dma_port) + return 0; + + nvm = kzalloc(sizeof(*nvm), GFP_KERNEL); + if (!nvm) + return -ENOMEM; + + nvm->id = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL); + + /* + * If the switch is in safe-mode the only accessible portion of + * the NVM is the non-active one where userspace is expected to + * write new functional NVM. + */ + if (!sw->safe_mode) { + u32 nvm_size, hdr_size; + + ret = dma_port_flash_read(sw->dma_port, NVM_FLASH_SIZE, &val, + sizeof(val)); + if (ret) + goto err_ida; + + hdr_size = sw->generation < 3 ? 
SZ_8K : SZ_16K; + nvm_size = (SZ_1M << (val & 7)) / 8; + nvm_size = (nvm_size - hdr_size) / 2; + + ret = dma_port_flash_read(sw->dma_port, NVM_VERSION, &val, + sizeof(val)); + if (ret) + goto err_ida; + + nvm->major = val >> 16; + nvm->minor = val >> 8; + + nvm_dev = register_nvmem(sw, nvm->id, nvm_size, true); + if (IS_ERR(nvm_dev)) { + ret = PTR_ERR(nvm_dev); + goto err_ida; + } + nvm->active = nvm_dev; + } + + nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false); + if (IS_ERR(nvm_dev)) { + ret = PTR_ERR(nvm_dev); + goto err_nvm_active; + } + nvm->non_active = nvm_dev; + + mutex_lock(&switch_lock); + sw->nvm = nvm; + mutex_unlock(&switch_lock); + + return 0; + +err_nvm_active: + if (nvm->active) + nvmem_unregister(nvm->active); +err_ida: + ida_simple_remove(&nvm_ida, nvm->id); + kfree(nvm); + + return ret; +} + +static void tb_switch_nvm_remove(struct tb_switch *sw) +{ + struct tb_switch_nvm *nvm; + + mutex_lock(&switch_lock); + nvm = sw->nvm; + sw->nvm = NULL; + mutex_unlock(&switch_lock); + + if (!nvm) + return; + + /* Remove authentication status in case the switch is unplugged */ + if (!nvm->authenticating) + nvm_clear_auth_status(sw); + + nvmem_unregister(nvm->non_active); + if (nvm->active) + nvmem_unregister(nvm->active); + ida_simple_remove(&nvm_ida, nvm->id); + vfree(nvm->buf); + kfree(nvm); +} + /* port utility functions */ static const char *tb_port_type(struct tb_regs_port_header *port) @@ -448,6 +830,83 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RW(key); +static ssize_t nvm_authenticate_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); + u32 status; + + nvm_get_auth_status(sw, &status); + return sprintf(buf, "%#x\n", status); +} + +static ssize_t nvm_authenticate_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct tb_switch *sw = tb_to_switch(dev); + bool val; + int ret; + + if (mutex_lock_interruptible(&switch_lock)) + return -ERESTARTSYS; + + /* If NVMem devices are not yet added */ + if (!sw->nvm) { + ret = -EAGAIN; + goto exit_unlock; + } + + ret = kstrtobool(buf, &val); + if (ret) + goto exit_unlock; + + /* Always clear the authentication status */ + nvm_clear_auth_status(sw); + + if (val) { + ret = nvm_validate_and_write(sw); + if (ret) + goto exit_unlock; + + sw->nvm->authenticating = true; + + if (!tb_route(sw)) + ret = nvm_authenticate_host(sw); + else + ret = nvm_authenticate_device(sw); + } + +exit_unlock: + mutex_unlock(&switch_lock); + + if (ret) + return ret; + return count; +} +static DEVICE_ATTR_RW(nvm_authenticate); + +static ssize_t nvm_version_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct tb_switch *sw = tb_to_switch(dev); + int ret; + + if (mutex_lock_interruptible(&switch_lock)) + return -ERESTARTSYS; + + if (sw->safe_mode) + ret = -ENODATA; + else if (!sw->nvm) + ret = -EAGAIN; + else + ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor); + + mutex_unlock(&switch_lock); + + return ret; +} +static DEVICE_ATTR_RO(nvm_version); + static ssize_t vendor_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -480,6 +939,8 @@ static struct attribute *switch_attrs[] = { &dev_attr_device.attr, &dev_attr_device_name.attr, &dev_attr_key.attr, + &dev_attr_nvm_authenticate.attr, + &dev_attr_nvm_version.attr, &dev_attr_vendor.attr, &dev_attr_vendor_name.attr, &dev_attr_unique_id.attr, @@ -498,9 +959,14 @@ static umode_t 
switch_attr_is_visible(struct kobject *kobj, sw->security_level == TB_SECURITY_SECURE) return attr->mode; return 0; + } else if (attr == &dev_attr_nvm_authenticate.attr || + attr == &dev_attr_nvm_version.attr) { + if (sw->dma_port) + return attr->mode; + return 0; } - return attr->mode; + return sw->safe_mode ? 0 : attr->mode; } static struct attribute_group switch_group = { @@ -651,6 +1117,45 @@ err_free_sw_ports: return NULL; } +/** + * tb_switch_alloc_safe_mode() - allocate a switch that is in safe mode + * @tb: Pointer to the owning domain + * @parent: Parent device for this switch + * @route: Route string for this switch + * + * This creates a switch in safe mode. This means the switch pretty much + * lacks all capabilities except DMA configuration port before it is + * flashed with a valid NVM firmware. + * + * The returned switch must be released by calling tb_switch_put(). + * + * Return: Pointer to the allocated switch or %NULL in case of failure + */ +struct tb_switch * +tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route) +{ + struct tb_switch *sw; + + sw = kzalloc(sizeof(*sw), GFP_KERNEL); + if (!sw) + return NULL; + + sw->tb = tb; + sw->config.depth = tb_route_length(route); + sw->config.route_hi = upper_32_bits(route); + sw->config.route_lo = lower_32_bits(route); + sw->safe_mode = true; + + device_initialize(&sw->dev); + sw->dev.parent = parent; + sw->dev.bus = &tb_bus_type; + sw->dev.type = &tb_switch_type; + sw->dev.groups = switch_groups; + dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw)); + + return sw; +} + /** * tb_switch_configure() - Uploads configuration to the switch * @sw: Switch to configure @@ -717,8 +1222,11 @@ static void tb_switch_set_uuid(struct tb_switch *sw) sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL); } -static void tb_switch_add_dma_port(struct tb_switch *sw) +static int tb_switch_add_dma_port(struct tb_switch *sw) { + u32 status; + int ret; + switch (sw->generation) { case 3: break; @@ -726,14 +1234,49 @@ static void tb_switch_add_dma_port(struct tb_switch *sw) case 2: /* Only root switch can be upgraded */ if (tb_route(sw)) - return; + return 0; break; default: - return; + /* + * DMA port is the only thing available when the switch + * is in safe mode. + */ + if (!sw->safe_mode) + return 0; + break; } + if (sw->no_nvm_upgrade) + return 0; + sw->dma_port = dma_port_alloc(sw); + if (!sw->dma_port) + return 0; + + /* + * Check status of the previous flash authentication. If there + * is one we need to power cycle the switch in any case to make + * it functional again. + */ + ret = dma_port_flash_update_auth_status(sw->dma_port, &status); + if (ret <= 0) + return ret; + + if (status) { + tb_sw_info(sw, "switch flash authentication failed\n"); + tb_switch_set_uuid(sw); + nvm_set_auth_status(sw, status); + } + + tb_sw_info(sw, "power cycling the switch now\n"); + dma_port_power_cycle(sw->dma_port); + + /* + * We return error here which causes the switch adding failure. + * It should appear back after power cycle is complete. + */ + return -ESHUTDOWN; } /** @@ -759,29 +1302,41 @@ int tb_switch_add(struct tb_switch *sw) * to the userspace. NVM can be accessed through DMA * configuration based mailbox. 
*/ - tb_switch_add_dma_port(sw); - - /* read drom */ - ret = tb_drom_read(sw); - if (ret) { - tb_sw_warn(sw, "tb_eeprom_read_rom failed\n"); + ret = tb_switch_add_dma_port(sw); + if (ret) return ret; - } - tb_sw_info(sw, "uid: %#llx\n", sw->uid); - tb_switch_set_uuid(sw); + if (!sw->safe_mode) { + /* read drom */ + ret = tb_drom_read(sw); + if (ret) { + tb_sw_warn(sw, "tb_eeprom_read_rom failed\n"); + return ret; + } + tb_sw_info(sw, "uid: %#llx\n", sw->uid); - for (i = 0; i <= sw->config.max_port_number; i++) { - if (sw->ports[i].disabled) { - tb_port_info(&sw->ports[i], "disabled by eeprom\n"); - continue; + tb_switch_set_uuid(sw); + + for (i = 0; i <= sw->config.max_port_number; i++) { + if (sw->ports[i].disabled) { + tb_port_info(&sw->ports[i], "disabled by eeprom\n"); + continue; + } + ret = tb_init_port(&sw->ports[i]); + if (ret) + return ret; } - ret = tb_init_port(&sw->ports[i]); - if (ret) - return ret; } - return device_add(&sw->dev); + ret = device_add(&sw->dev); + if (ret) + return ret; + + ret = tb_switch_nvm_add(sw); + if (ret) + device_del(&sw->dev); + + return ret; } /** @@ -808,6 +1363,7 @@ void tb_switch_remove(struct tb_switch *sw) if (!sw->is_unplugged) tb_plug_events_active(sw, false); + tb_switch_nvm_remove(sw); device_unregister(&sw->dev); } @@ -976,3 +1532,8 @@ struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid) return NULL; } + +void tb_switch_exit(void) +{ + ida_destroy(&nvm_ida); +} diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index ad2304bad592..1b02ca0b6129 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -369,6 +369,13 @@ static int tb_start(struct tb *tb) if (!tb->root_switch) return -ENOMEM; + /* + * ICM firmware upgrade needs running firmware and in native + * mode that is not available so disable firmware upgrade of the + * root switch. + */ + tb->root_switch->no_nvm_upgrade = true; + ret = tb_switch_configure(tb->root_switch); if (ret) { tb_switch_put(tb->root_switch); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index a998b3a251d5..3d9f64676e58 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -7,6 +7,7 @@ #ifndef TB_H_ #define TB_H_ +#include #include #include @@ -14,6 +15,30 @@ #include "ctl.h" #include "dma_port.h" +/** + * struct tb_switch_nvm - Structure holding switch NVM information + * @major: Major version number of the active NVM portion + * @minor: Minor version number of the active NVM portion + * @id: Identifier used with both NVM portions + * @active: Active portion NVMem device + * @non_active: Non-active portion NVMem device + * @buf: Buffer where the NVM image is stored before it is written to + * the actual NVM flash device + * @buf_data_size: Number of bytes actually consumed by the new NVM + * image + * @authenticating: The switch is authenticating the new NVM + */ +struct tb_switch_nvm { + u8 major; + u8 minor; + int id; + struct nvmem_device *active; + struct nvmem_device *non_active; + void *buf; + size_t buf_data_size; + bool authenticating; +}; + /** * enum tb_security_level - Thunderbolt security level * @TB_SECURITY_NONE: No security, legacy mode @@ -39,7 +64,8 @@ enum tb_security_level { * @ports: Ports in this switch * @dma_port: If the switch has port supporting DMA configuration based * mailbox this will hold the pointer to that (%NULL - * otherwise). + * otherwise). If set it also means the switch has + * upgradeable NVM. 
* @tb: Pointer to the domain the switch belongs to * @uid: Unique ID of the switch * @uuid: UUID of the switch (or %NULL if not supported) @@ -51,6 +77,9 @@ enum tb_security_level { * @cap_plug_events: Offset to the plug events capability (%0 if not found) * @is_unplugged: The switch is going away * @drom: DROM of the switch (%NULL if not found) + * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise) + * @no_nvm_upgrade: Prevent NVM upgrade of this switch + * @safe_mode: The switch is in safe-mode * @authorized: Whether the switch is authorized by user or policy * @work: Work used to automatically authorize a switch * @security_level: Switch supported security level @@ -81,6 +110,9 @@ struct tb_switch { int cap_plug_events; bool is_unplugged; u8 *drom; + struct tb_switch_nvm *nvm; + bool no_nvm_upgrade; + bool safe_mode; unsigned int authorized; struct work_struct work; enum tb_security_level security_level; @@ -172,6 +204,7 @@ struct tb_path { * @approve_switch: Approve switch * @add_switch_key: Add key to switch * @challenge_switch_key: Challenge switch using key + * @disconnect_pcie_paths: Disconnects PCIe paths before NVM update */ struct tb_cm_ops { int (*driver_ready)(struct tb *tb); @@ -187,6 +220,7 @@ struct tb_cm_ops { int (*add_switch_key)(struct tb *tb, struct tb_switch *sw); int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw, const u8 *challenge, u8 *response); + int (*disconnect_pcie_paths)(struct tb *tb); }; /** @@ -340,6 +374,7 @@ extern struct device_type tb_switch_type; int tb_domain_init(void); void tb_domain_exit(void); +void tb_switch_exit(void); struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize); int tb_domain_add(struct tb *tb); @@ -351,6 +386,7 @@ void tb_domain_complete(struct tb *tb); int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw); int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw); int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw); +int tb_domain_disconnect_pcie_paths(struct tb *tb); static inline void tb_domain_put(struct tb *tb) { @@ -359,6 +395,8 @@ static inline void tb_domain_put(struct tb *tb) struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, u64 route); +struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb, + struct device *parent, u64 route); int tb_switch_configure(struct tb_switch *sw); int tb_switch_add(struct tb_switch *sw); void tb_switch_remove(struct tb_switch *sw); -- cgit v1.2.3 From a7ec9371dd11f0bc705a703fa0e5c1c11339f1a6 Mon Sep 17 00:00:00 2001 From: Christopher Bostic Date: Tue, 6 Jun 2017 16:08:53 -0500 Subject: drivers/fsi: Document FSI master sysfs files in ABI Add info for sysfs scan file in Documentaiton ABI/testing Signed-off-by: Christopher Bostic Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-bus-fsi | 38 +++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) create mode 100644 Documentation/ABI/testing/sysfs-bus-fsi (limited to 'Documentation/ABI') diff --git a/Documentation/ABI/testing/sysfs-bus-fsi b/Documentation/ABI/testing/sysfs-bus-fsi new file mode 100644 index 000000000000..57c806350d6c --- /dev/null +++ b/Documentation/ABI/testing/sysfs-bus-fsi @@ -0,0 +1,38 @@ +What: /sys/bus/platform/devices/fsi-master/rescan +Date: May 2017 +KernelVersion: 4.12 +Contact: cbostic@linux.vnet.ibm.com +Description: + Initiates a FSI master scan for all connected slave devices + on its links. 
+ +What: /sys/bus/platform/devices/fsi-master/break +Date: May 2017 +KernelVersion: 4.12 +Contact: cbostic@linux.vnet.ibm.com +Description: + Sends an FSI BREAK command on a master's communication + link to any connnected slaves. A BREAK resets connected + device's logic and preps it to receive further commands + from the master. + +What: /sys/bus/platform/devices/fsi-master/slave@00:00/term +Date: May 2017 +KernelVersion: 4.12 +Contact: cbostic@linux.vnet.ibm.com +Description: + Sends an FSI terminate command from the master to its + connected slave. A terminate resets the slave's state machines + that control access to the internally connected engines. In + addition the slave freezes its internal error register for + debugging purposes. This command is also needed to abort any + ongoing operation in case of an expired 'Master Time Out' + timer. + +What: /sys/bus/platform/devices/fsi-master/slave@00:00/raw +Date: May 2017 +KernelVersion: 4.12 +Contact: cbostic@linux.vnet.ibm.com +Description: + Provides a means of reading/writing a 32 bit value from/to a + specified FSI bus address. -- cgit v1.2.3
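
As a usage illustration for the ABI entries above, the following is a small, hypothetical userspace sketch that triggers an FSI master rescan. The device path and the written value are assumptions: the ABI text only states that a write to rescan initiates a scan for connected slaves, so the sketch simply writes a single character.

/*
 * Hypothetical sketch of poking the FSI master sysfs attributes documented
 * above. Path and trigger value are assumptions for illustration only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define FSI_MASTER "/sys/bus/platform/devices/fsi-master"  /* assumed path */

static int poke(const char *attr)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path), "%s/%s", FSI_MASTER, attr);
	fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror(path);
		return -1;
	}
	/* Assume any non-empty write starts the operation */
	if (write(fd, "1", 1) != 1) {
		perror(path);
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}

int main(void)
{
	/* Scan the master's links for connected slave devices */
	return poke("rescan") ? 1 : 0;
}

The break and term files can be written in the same manner; the raw file additionally involves a 32 bit value at a specified FSI bus address, whose exact encoding is not spelled out in the ABI text above, so it is not shown here.
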