• PLDA and Samtec Demonstrate PCIe 4.0 Communication over Twinax Cables Allowing Full 16GT/s PCIe 4.0 Bandwidth at Minimal Manufacturing Cost

    PLDA, the industry leader in PCI Express® controller IP solutions, and Samtec, a privately held $800MM global manufacturer of a broad line of electronic interconnect solutions, today announced a demonstration of their combined PCIe 4.0 solution that delivers full PCIe 4.0 bandwidth (16 GT/s) over copper or optical fiber at minimal cost.

    The solution is based on a PLDA PCIe 4.0 acquisition board running PLDA’s PCIe 4.0 controller IP, combined with Samtec’s FireFly™ Micro Flyover System™. The demo shows a way to overcome the inherent difficulty of maintaining PCIe 4.0 performance over long distances without changing the motherboard technology or resorting to costly retimers.

  • PLDA Offers XpressRICH PCIe and CCIX Controller IP Through SiFive DesignShare Program

    SAN MATEO, Calif., Nov. 26, 2018 -- SiFive, the leading provider of commercial RISC-V processor IP, today announced that PLDA, the leader in PCIe and CCIX Controller IP solutions, has joined the DesignShare™ ecosystem. Through this collaboration, PLDA will provide its suite of XpressRICH core IP, which enables high-speed connectivity for applications such as enterprise storage, networking, high-performance computing, and artificial intelligence.

  • PCIe 4.0 Communication over Optical Fiber and TWINAX Cables

    As cloud computing and deep learning accelerators drive faster advances in the PCIe roadmap, existing hardware designs cannot support the higher-speed signals over the same distances. The PCIe 1.0 specification allowed signals to travel as far as 20 inches over traces in mainstream FR4 boards, even while passing through two connectors. In contrast, today’s faster 16 GT/s PCIe 4.0 signals degrade in under six inches, even without passing through any connectors.
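    As a back-of-the-envelope check on those line rates, 16 GT/s per lane with PCIe 4.0's 128b/130b encoding works out to roughly 2 GB/s of raw bandwidth per lane, or about 31.5 GB/s per direction on a x16 link. A minimal sketch of the arithmetic (function name and lane count are illustrative, not from the announcement):

    ```python
    def pcie_lane_bandwidth_gBps(gt_per_s: float, enc_payload: int, enc_total: int) -> float:
        """Raw per-lane bandwidth in GB/s after line-encoding overhead.

        gt_per_s: transfer rate in GT/s (one bit per transfer per lane)
        enc_payload / enc_total: line-code ratio, e.g. 128/130 for PCIe 3.0/4.0
        """
        return gt_per_s * enc_payload / enc_total / 8  # bits -> bytes

    # PCIe 4.0: 16 GT/s per lane, 128b/130b encoding
    per_lane = pcie_lane_bandwidth_gBps(16.0, 128, 130)
    print(f"per lane: {per_lane:.2f} GB/s")        # ~1.97 GB/s
    print(f"x16 link: {per_lane * 16:.1f} GB/s")   # ~31.5 GB/s per direction
    ```

    This counts only line-encoding overhead; packet headers and flow control reduce the usable payload rate further.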

  • Gen-Z Primer for Early Adopters

    Computer systems as we know them have been built on the paradigm that the CPU-memory pair is fast while network and storage are slow. Over the years, these components developed their own language and interfaces that require layers of software to translate memory commands into network and storage commands and vice versa.

    Until now, the speed of the CPU-memory pair relative to network and storage I/O was such that these software layers had minimal impact on system performance.

    However, with Moore’s law in full effect, network and storage technologies are quickly catching up with CPU-memory speeds, and the burden of those accumulated software layers is becoming significant.