ARM-based Server Processor SoC

Overview

This chip is a follow-up to the customer's previous-generation ARM-based processor SoC, with more ARM cores, more built-in accelerators, and superior I/O capabilities. It is manufactured on the TSMC 7nm FinFET process using leading-edge PCIe PHY IP from one of our partners.

PLDA Inside

The processor SoC uses PLDA XpressRICH5 controller IP as the host PCIe interface. The IP operates as a PCIe Gen4 x16 root port.
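As a rough capacity check (not part of the original profile), the per-direction throughput of the link configurations mentioned throughout these profiles can be estimated from lane count, transfer rate, and line encoding, using the rates defined in the PCIe base specifications:

```python
# Back-of-the-envelope PCIe throughput estimator (illustrative sketch).
# Transfer rates and encodings per the PCIe base specifications:
#   Gen3: 8 GT/s per lane, 128b/130b encoding
#   Gen4: 16 GT/s per lane, 128b/130b encoding
def raw_gbps(gen: int, lanes: int) -> float:
    """Raw per-direction bandwidth in Gbit/s after line-encoding overhead."""
    rate = {3: 8.0, 4: 16.0}[gen]   # GT/s per lane
    efficiency = 128 / 130          # 128b/130b encoding overhead
    return rate * efficiency * lanes

print(round(raw_gbps(4, 16), 1))   # Gen4 x16 root port: ~252.1 Gbit/s (~31.5 GB/s)
print(round(raw_gbps(3, 16), 1))   # Gen3 x16 endpoint:  ~126.0 Gbit/s (~15.8 GB/s)
```

Actual application throughput is lower once TLP header, flow-control, and DLLP overheads are accounted for; this sketch only bounds the raw link capacity.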

Why PLDA?

PLDA IP was selected based on the following criteria:

  • Availability of Rx Stream mode, allowing Rx buffer implementation in the application logic for custom credit management
  • Availability of specific RAS features extending beyond the scope of the PCIe Specification
  • Ability to deliver on-time, quality IP customization services
  • Proven integration with the PHY IP selected (Avago PHY)

Deep Learning SoC for AI training acceleration

Overview

The Deep Learning Processing Unit SoC is the building block of an AI training system intended to accelerate neural net training by 1000x vs. GPU-based solutions. The chip is manufactured using the TSMC 16nm FinFET process.

PLDA Inside

The SoC uses PLDA XpressRICH3-AXI controller IP as the PCIe interface to the host CPU subsystem. The IP operates as a Gen3 x16 endpoint interface. The IP’s configurable AXI interface allowed a seamless connection to the SoC’s AXI4 fabric.

Why PLDA?

PLDA IP was selected based on the following criteria:

  • Availability of an AXI interconnect with built-in DMA allowing for AXI-to-AXI transfers
  • Strict support for AMBA AXI ordering rules to avoid deadlocks
  • Overall performance achieved across the AXI interfaces
  • Proven integration with the PHY IP selected (PHY partner Analog Bits)

Network Processor/Smart NIC SoC

Overview

This Network Processor and Smart NIC SoC is designed to accelerate firewall and VPN functions. It supports multiple 100 Gbps Ethernet links and L2 switching. The chip is manufactured using the TSMC 28nm HPC process.

PLDA Inside

The SoC uses PLDA XpressRICH4 controller IP as the PCIe interface to the host CPU subsystem. The IP operates as a Gen4 x8 endpoint interface with SR-IOV.

Why PLDA?

PLDA IP was selected based on the following criteria:

  • Robust and flexible support for Virtualization via SR-IOV
  • Full support for Synopsys PCIe PHY IP, with a synthesizable top-level
  • Quality and responsiveness of PLDA technical support (based on earlier successful projects)

NVMe-oF Bridge ASIC

Overview

The 100G NVMe-oF bridge ASIC is designed specifically to enable NVMe JBOFs to connect directly to an NVMe-oF network via a standard PCIe x16 slot in the JBOF. The chip is manufactured on the TSMC 28nm HPC process.

PLDA Inside

The SoC uses PLDA XpressRICH3 controller IP as the PCIe interface to the host CPU subsystem. The IP operates as a Gen3 x16 endpoint interface, or can be configured as two Gen3 x8 links via port bifurcation.
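Port bifurcation splits the 16 physical lanes into two independent x8 links that enumerate as separate devices; the aggregate lane budget (and hence raw bandwidth) is preserved. A minimal sketch of that split, with an illustrative helper name:

```python
# Illustrative sketch of PCIe port bifurcation: splitting one lane budget
# evenly across multiple independent links. Function name is hypothetical.
def bifurcate(lanes: int, ports: int) -> list[int]:
    """Split a physical lane budget evenly across independent ports."""
    assert lanes % ports == 0, "lanes must divide evenly across ports"
    return [lanes // ports] * ports

print(bifurcate(16, 2))  # [8, 8]: one x16 interface becomes two x8 links
```

Each x8 link then trains and operates independently, which is what lets a single x16 interface serve two separate upstream connections.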

Why PLDA?

PLDA IP was selected based on the following criteria:

  • Proven integration with Synopsys PCIe PHY
  • Availability of port bifurcation wrappers
  • Maturity of PLDA XpressRICH3 Controller IP

NVMe Controller SoC Platform

Overview

EpoStar's PCIe-NVMe SSD controller platform (Libra) is compliant with the NVM Express 1.2 specification and targets both enterprise and client SSD markets. It features EpoStar's Meissa NVMe controller core and Alcyone LDPC error-correction core to enable low-power, cost-effective SSD controllers that support 1x/1y/1z MLC/TLC and 3D NAND. For more information about the Libra platform, visit http://www.epostar-elec.com/products_Platform.html.

PLDA Inside

The platform uses PLDA XpressRICH3-AXI controller IP as the PCIe interface to the host CPU subsystem. The IP operates as a Gen3 x4 endpoint interface with an option to support 8 lanes. The IP’s configurable AXI interface allowed a seamless connection to the SoC’s AXI fabric.

Why PLDA?

EpoStar selected PLDA IP based on the following criteria:

  • Flexible, user-configurable AXI interconnect
  • Handling of AMBA AXI ordering rules inside the IP vs. in application logic
  • Performance achieved across the AXI interfaces
  • Top-notch technical support from a dedicated and knowledgeable support team
  • Proven integration with the PHY IP selected (PHY partner GUC)

32G Fibre Channel Host Bus Adapter SoC

Overview

This 256G (4x64G) Fibre Channel HBA SoC is engineered for modern networked storage systems. The chip is manufactured using the TSMC 16nm FinFET process.

PLDA Inside

The SoC uses PLDA XpressRICH4 controller IP as the PCIe interface to the host CPU subsystem. The IP operates as a Gen4 x16 endpoint interface. The IP is used in Transaction Layer Bypass mode, allowing for ultra-low latency and the reuse of a proprietary transaction layer.

Why PLDA?

PLDA IP was selected based on the following criteria:

  • Availability of a TL bypass mode allowing the customer to implement its own Transaction Layer and ensure compatibility with existing drivers and firmware
  • Support for SR-IOV with a large number of virtualized functions
  • Flexibility of the PIPE interface in terms of supported widths and frequencies
  • Flexibility in the partitioning and implementation of the Receive/Transmit buffers for SR-IOV support

PCIe-over-HDBaseT Adapter

Overview

The customer developed a silicon chip that tunnels PCIe traffic over HDBaseT, providing a whole-car backbone network infrastructure for optimized sensor fusion, ADAS, and infotainment. HDBaseT supports multi-gigabit transfers over up to 50 feet of unshielded twisted-pair cable. The chip is manufactured on the TSMC 28nm HPC process.

PLDA Inside

The ASIC uses PLDA XpressRICH3 controller IP, which can be statically configured as a Gen3 x4 upstream or downstream port depending on where the chip is used in the system.

Why PLDA?

The customer selected PLDA IP based on the following criteria:

  • Availability of design expertise during the architecture phase
  • Superior IP metrics (gate count, latency) compared to competing solutions
  • Proven integration with the PHY IP selected (Cadence PHY)
  • Proven track record enabling PCIe bridging and switching designs

Wireless VR Accelerator ASIC

Overview

This customer developed an ASIC that accelerates a proprietary software video CODEC in hardware, reducing latency and latency jitter to the Virtual Reality headsets. The chip is manufactured on the SMIC 40LL process.

PLDA Inside

The ASIC uses PLDA XpressSWITCH switch IP with one Gen3 x4 upstream port to the host CPU and two downstream ports. One downstream port connects to an embedded Gen3 x4 endpoint (using PLDA XpressRICH3-AXI controller IP) responsible for the CODEC function. The second downstream port connects off-chip to a PCIe Gen3 x2 radio transmitter endpoint responsible for transferring encoded streams to the VR headsets.

Why PLDA?

The customer selected PLDA XpressSWITCH and XpressRICH3-AXI for the following reasons:

  • Proven integration with the PHY IP selected (PHY partner M31)
  • Availability of a PCI-SIG compliant switch IP with low latency
  • Ability to prototype the entire chip on FPGA, including the switch IP
  • Flexible/configurable AXI interfaces and integrated DMA in XpressRICH3-AXI for easy application development
  • Availability of on-site training to help with integration and silicon bring-up

Test Platform for NVMe Add-in cards

Overview

This customer developed a platform for testing multiple NVMe SSDs simultaneously. At the heart of the platform is a customer-developed PCIe adapter card based on an Intel/Altera Arria 10 FPGA, connecting upstream to a standard Intel host motherboard and downstream to multiple NVMe SSDs and multiple SATA SSDs.

This setup provided significant flexibility and cost savings for the customer compared to using separate PCIe and SATA controllers.

PLDA Inside

The FPGA implements multiple instances of PLDA XpressSWITCH PCIe multi-port transparent switch IP, each configured with one upstream PCIe Gen3 x8 port and two downstream PCIe Gen3 x4 ports.

Why PLDA?

The customer selected PLDA XpressSWITCH IP based on the following criteria:

  • Availability of a production-ready PCIe Switch IP for reduced BoM and design complexity
  • Full support for Arria 10 FPGA with no compromise on throughput
  • Availability of customization services