Enabling Composable Platforms with On-Chip PCIe Switching and PCIe-over-Cable

1. Introduction

Modern enterprise workloads in AI and data analytics are driving the need for new compute and storage architectures in IT infrastructures. The growing use of accelerators (GPUs, FPGAs, custom ASICs) and emerging memory technologies (3D XPoint, Storage Class Memory, Persistent Memory), and the need to better distribute and utilize these resources are fueling the transition to composable/disaggregated infrastructures (CDI) in data centers.

PCIe 5.0 vs. Emerging Protocol Standards - Will PCIe 5.0 Become Ubiquitous in Tomorrow’s SoCs?

1. Introduction

Over the last three years, a number of protocol standards have emerged, aiming to address the growing demand for higher data throughput and more efficient data movement. While CCIX, Gen-Z, and OpenCAPI are relative newcomers, PCIe has been around for almost two decades. With the imminent release of version 5.0 of the PCIe Specification, SoC designers have a variety of options for supporting bandwidths in excess of 400 Gbit/s while improving overall communication efficiency.
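The 400 Gbit/s figure can be sanity-checked with simple arithmetic: a PCIe 5.0 x16 link runs at 32 GT/s per lane and uses 128b/130b line encoding. A minimal sketch of that calculation (ignoring TLP/DLLP packet overhead, which reduces real-world throughput somewhat):

```python
# Sketch: effective per-direction payload bandwidth of a PCIe link.
# Ignores packet (TLP/DLLP) framing overhead; illustrative only.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int,
                        payload_bits: int = 128, coded_bits: int = 130) -> float:
    """Payload bandwidth in Gbit/s after line encoding (128b/130b by default)."""
    return gt_per_s * lanes * payload_bits / coded_bits

# PCIe 5.0: 32 GT/s per lane, x16 link, 128b/130b encoding
bw = pcie_bandwidth_gbps(32.0, 16)
print(f"{bw:.0f} Gbit/s")  # ~504 Gbit/s, i.e. roughly 63 GB/s per direction
```

The same function with 16 GT/s gives roughly 252 Gbit/s for a PCIe 4.0 x16 link, which illustrates why each PCIe generation doubles the signaling rate.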

In this article, we look into the various protocols by providing a brief history, a quick technical comparison, and the latest deployment status. We also offer our perspective on the evolution of these protocols.

Bulletproofing PCIe-based SoCs with Advanced Reliability, Availability, Serviceability (RAS) Mechanisms

1. Introduction

As silicon manufacturing process nodes keep shrinking and transistors get smaller, Systems-on-Chip (SoCs) are increasingly subject to failures caused by changing external conditions such as temperature, EMI, power surges, and Hot Plug events.

The transition to PCIe 4.0 and 5.0, with their increased signaling speeds (16 GT/s and 32 GT/s, respectively), also raises the risk of errors due to tightening timing budgets inside the SoC and electrical issues outside the SoC (e.g. crosstalk, line attenuation, and jitter).

Gen-Z Primer for Early Adopters

Computer systems as we know them have been built on the paradigm that the CPU-memory pair is fast while network and storage are slow. Over the years, these components developed their own languages and interfaces, which require layers of software to translate memory commands into network and storage commands and vice versa.

Until now, the speed of the CPU-memory pair relative to network and storage I/O was such that these software layers had minimal impact on system performance.

However, with Moore’s law in full effect, network and storage technologies are quickly catching up with CPU-memory speeds and the burden of generations of software layers now becomes significant.

PCIe Switch: Webinar - Demonstration

PLDA’s PCIe Switch Solution provides a customizable and scalable switch design tailored to the needs of the PCIe switch marketplace. The PCIe® architecture has become a standard within storage data centers due to the advantages it delivers in latency and performance. PCIe switches manage dataflow within a device, delivering the flexibility, scalability, and configurability required to connect a large pool of drives or networks.

In this presentation, Rabih Eid, Support Manager, explains how to take advantage of embedded PCIe switch technology to reduce BOM cost, reuse existing designs, decrease system power consumption, and improve overall performance compared to an ASSP device.

Visit PLDA's PCIe Switch IP page for more information.

White Paper: Methods to Fine-Tune Power Consumption of PCIe devices

 


A fundamental tension in electronic evolution is the desire to execute more functions while consuming less power and silicon area. For PCIe® applications, this goal is not a new one. A PCI power management specification has been available since 1997, and PCI Express® has featured native power management since its initial release in 2002. In addition, there has always been a recognized need for low power in the mobile market.

Is the Market Ready To Conquer PCIe 4.0 Challenges?

PLDA, a company that designs and sells intellectual property (IP) cores and prototyping tools for ASICs and FPGAs, has optimized its ASIC IP cores for PCI Express® 4.0, the next generation of the ubiquitous, general-purpose I/O specification. PLDA’s proven PCIe 3.0 architecture enables easy migration to PCIe 4.0, with no interface changes necessary, and preserves existing behavior for seamless integration.

Why Using Single Root I/O Virtualization (SR-IOV) Can Help Improve I/O Performance and Reduce Costs

By Philippe Legros, Product Design Architect, PLDA

Introduction

While server virtualization is widely deployed to reduce costs and optimize data center resource usage, another key area where virtualization can shine is I/O performance and its role in enabling more efficient application execution.

 
