
A chip industry group that encompasses major stakeholders such as Intel, AMD, Arm, TSMC and Samsung today announced the UCIe chiplet interconnect, along with a new consortium created to support the standard. The goal of both is to foster an open ecosystem in which chiplets from different vendors are interoperable.

What are chiplets?

Over the last five years or so, thanks to the advent of advanced 2.5D and 3D packaging, chiplets have emerged as a new level of abstraction in chip design. Chiplets are pieces of silicon that by themselves don't form a complete system; rather, each represents the physical implementation of a single IP block or a small collection of them. Chiplets can then be connected to one another to form a complete system.

This design paradigm allows IP reuse across different products and also lowers cost, because manufacturing yield is higher for the smaller individual chiplets. Chiplets also provide greater flexibility, allowing a different manufacturing process to be used for each chiplet, whereas a traditional SoC (system-on-a-chip) by necessity must be manufactured on a single process.

While various companies, including Intel and AMD, have already brought products to market based on this design approach, the holy grail for the industry is a multivendor chiplet ecosystem, in which system designers can pick and choose their preferred chiplets from various vendors. For this to work, however, chiplets must be interoperable, meaning they can actually communicate and share data. To date, each company has developed its own interconnect, although there have been standardization efforts, such as Intel's AIB (advanced interface bus) physical layer. CXL (compute express link) has emerged as the leading chip-to-chip protocol layer.



UCIe interconnect and consortium

Today, the industry consortium, which includes AMD, Arm, Google Cloud, Intel, Meta, Qualcomm, Samsung and TSMC, announced UCIe (Universal Chiplet Interconnect Express), a new die-to-die interconnect intended to enable an open, multivendor chiplet ecosystem. More specifically, the ratified UCIe 1.0 specification covers the physical I/O layer, die-to-die protocols and a software stack that leverages the existing PCIe and CXL industry interconnect standards (although other protocols could be used in principle). There is even support for inter-board interconnection: in the future, the industry expects to be able to connect different boards at the package level using co-packaged or even integrated photonics.

The UCIe interconnect aims to accomplish at the package level what the PCIe interconnect has achieved for decades at the board level. In general, the advantage of interconnecting systems and IP at the level of the package (rather than the board as PCIe does) is to significantly reduce the energy required per bit and improve the bandwidth, both of which can be major bottlenecks.

Intel has donated its AIB PHY to the UCIe standard. However, this doesn’t mean UCIe would work only with Intel’s packaging technology, as the protocol is meant to be package-technology agnostic. Nevertheless, there will be some additional complexity over traditional interconnects given the wide range of packaging technologies that exist, which for example all tend to have different bump pitches. (The bump pitch, the measure of the distance between interconnect bumps, is a rough indicator for the interconnect density and power that can be achieved, similar to the transistor pitch of process nodes.) 

In that regard, UCIe 1.0 covers two sets of specifications: one for standard packages (using traditional substrates) and one for advanced packages (using advanced packaging technologies with smaller bump pitches for higher bandwidth). Still, even within one of these two categories, chiplets will likely be compatible only if they were designed with the same bump pitch in mind. In one of the most important metrics, energy per bit, both specifications aim to come in significantly under the roughly 2 pJ/bit of PCIe, with targets of 0.5 pJ/bit for standard packages and 0.25 pJ/bit for advanced packages.
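To put these energy-per-bit figures in perspective, the power drawn by a link is simply its bandwidth multiplied by its energy per bit. The following sketch uses the pJ/bit figures cited above; the 1 Tb/s bandwidth is a hypothetical example for illustration, not a number from the UCIe specification.

```python
# Illustrative only: link power = (bits per second) x (joules per bit).
# The pJ/bit figures are those cited in the article; the bandwidth
# is a made-up example value.

def link_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Return link power in watts for a given bandwidth and energy/bit."""
    bits_per_second = bandwidth_gbps * 1e9   # Gb/s -> b/s
    joules_per_bit = energy_pj_per_bit * 1e-12  # pJ -> J
    return bits_per_second * joules_per_bit

# A hypothetical 1 Tb/s (1,000 Gb/s) die-to-die link:
for name, e in [("PCIe (~2 pJ/bit)", 2.0),
                ("UCIe standard package (0.5 pJ/bit)", 0.5),
                ("UCIe advanced package (0.25 pJ/bit)", 0.25)]:
    print(f"{name}: {link_power_watts(1000, e):.2f} W")
```

At the same bandwidth, the advanced-package target would draw roughly an eighth of the power of a ~2 pJ/bit board-level link, which is why package-level interconnects matter for power-constrained systems.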

The organization already has plans for further work, covering the form factor, management, security and other protocols. The specification also does not yet support 3D packaging.

Additional advantages of chiplets

As indicated, chiplets have various advantages. The advanced packaging technology used to interconnect chiplets uses less energy and delivers higher bandwidth than board-level interconnects such as PCIe. In some cases, such packaging and chiplets are actually required, since the maximum size of a chip is constrained by the reticle size limit during manufacturing, which caps die area at approximately 850 mm². To create larger systems, chips must be interconnected in some way.


Intel, in particular, has been evangelizing its new design approach called die disaggregation (or partitioning). In this methodology, what traditionally would be an SoC is split up into various smaller chiplets or tiles, which yields several additional benefits.

Since a single defect during manufacturing is enough to make a complete chip nonfunctional, yield diminishes quickly with increasing die area, which increases cost. Hence, die disaggregation tends to result in much lower cost and has been a major reason for its use in chips such as AMD’s high-end Ryzen and Epyc CPUs and Intel’s Ponte Vecchio GPU. 
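The yield argument can be made concrete with a simple Poisson defect model, a common first-order approximation in which yield falls exponentially with die area. The defect density and die areas below are hypothetical illustration values, not figures from any vendor.

```python
import math

# First-order Poisson yield model: Y = exp(-D * A), where D is the
# defect density (defects per cm^2) and A is the die area (cm^2).
# Real fabs use more refined models; this is an illustrative sketch.

def poisson_yield(defect_density_per_cm2: float, area_mm2: float) -> float:
    """Return the fraction of defect-free dies under a Poisson model."""
    area_cm2 = area_mm2 / 100.0  # mm^2 -> cm^2
    return math.exp(-defect_density_per_cm2 * area_cm2)

D = 0.2  # hypothetical defects per cm^2

# One monolithic 600 mm^2 die vs. a 150 mm^2 chiplet (a quarter the area):
mono = poisson_yield(D, 600)
chiplet = poisson_yield(D, 150)

print(f"monolithic 600 mm^2 die yield:  {mono:.1%}")
print(f"single 150 mm^2 chiplet yield: {chiplet:.1%}")
```

Because each small chiplet is far more likely to be defect-free than one large die, less silicon is discarded per defect, which is the cost advantage the article describes.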

In addition, die disaggregation offers the flexibility to use different manufacturing processes in a single design, and potentially faster time to market. Essentially, the goal is to disaggregate the SoC back into its individual IP blocks to gain the described flexibility and cost benefits, while maintaining performance and power as if it were a monolithic chip.

These individual chiplet building blocks can then be reused across a vendor's portfolio. A vendor could, for example, decide to keep some less critical chiplets in a system on a trailing-edge node, while moving only the most crucial IP to the latest process technology. Alternatively, just a single chiplet could be swapped out for another, perhaps with wholly different functionality. In both cases, the system would be improved without revalidating or redesigning what traditionally would have been an entirely new SoC (with its own mask set in the fab).

As an example, Intel has been building its own chiplet ecosystem with its FPGAs since 2017, proliferating its FPGA portfolio over time by mixing and matching different chiplet building blocks attached to one of several base FPGA dies. In one case, Intel launched a new FPGA with a PCIe 4.0 chiplet several years after the product's initial launch, as an upgrade over the prior PCIe 3.0 support. In another, some transceiver chiplets were swapped out so that HBM memory could be attached to those chiplet "slots" instead.
