Presented by Intel
Moore’s Law has been the guiding principle for the semiconductor industry for more than fifty years. For thirty of those years, I have had the privilege of working in Intel’s technology development organization — giving me a bird’s-eye view of the breakthrough innovations that have enabled continued improvements in transistor density, performance, and energy efficiency. While there are many voices today predicting the imminent demise of Moore’s Law, I couldn’t disagree more. I believe the future is brighter than ever, with more innovative technology options in the pipeline now than I have seen at any point in my career.
At its simplest level, Moore’s Law refers to a doubling of transistors on a chip with each process generation. Over the years, this exponential increase in transistor density has remained remarkably consistent, but two things have changed along the way: how we achieve these density increases and the benefits we derive at the product level. Whether it’s higher frequencies and lower power consumption or more functionality integrated on a chip, Moore’s Law has adapted and evolved to meet the demands of every technology generation from mainframes to mobile phones. This evolution will continue as we move into a new era of unlimited data and artificial intelligence.
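The doubling cadence described above is simple compounding. As a minimal sketch (the starting density and generation counts below are hypothetical round numbers for illustration, not Intel figures):

```python
# Illustrative sketch of Moore's Law doubling: transistor density
# doubles with each process generation. The base density is a
# hypothetical placeholder value, not an actual Intel figure.

def density_after(generations: int, base_density: float = 100.0) -> float:
    """Hypothetical transistor density (millions per mm^2) after a
    given number of doubling generations."""
    return base_density * 2 ** generations

# Five doubling generations compound to a 32x density increase.
print(density_after(5))  # 100 * 2**5 = 3200.0
```

The point of the sketch is the compounding: ten generations of doubling yield roughly a thousandfold increase, which is why even modest changes to the cadence matter so much at the product level.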
What innovations will drive Moore’s Law over the next decade? I believe they can collectively be categorized into two broad areas: monolithic scaling and system scaling. Monolithic scaling might be referred to as “classic” Moore’s Law scaling, with a focus on reducing transistor feature sizes and operating voltages while increasing transistor performance. System scaling improvements are the gains that help us incorporate new types of heterogeneous processors via advances in chiplets, packaging, and high-bandwidth chip-to-chip interconnect technologies.
Intel is investing heavily in research to support both vectors. At the recent annual gathering of the world’s top semiconductor process technologists — IEDM in San Francisco — Intel engineers presented nearly twenty papers demonstrating groundbreaking work to advance Moore’s Law for the next generation. What follows is a high-level summary of these exciting technology options.
Monolithic scaling: A new dimension
Current Intel processors are based on a transistor structure known as FinFET, in which the gate surrounds the fin-shaped channel on three sides. As Intel’s process nodes have advanced, we made the fins taller and narrower, allowing us to reduce the number of fins necessary to achieve a given level of performance. While FinFETs still have plenty of life, at some point in the near future the industry will transition to a new type of transistor architecture: Gate-All-Around (GAA) FETs, in which the gate wraps around the channel on all sides. There are multiple potential implementations of GAAFETs, from skinny nanowires to wide nanoribbons. What they have in common is the ability to pack more high-performance transistors into a given area, thus reducing the width of the standard cells our designers use to build new processors.
In addition to this new transistor architecture, another way to drive cell area scaling is through vertical stacking of transistor devices. Modern semiconductors are built from complementary pairs of n-type and p-type transistors, called NMOS and PMOS. The height of a standard cell can be significantly decreased through monolithic stacking of an NMOS device on top of a PMOS device, or vice versa. This can be accomplished by stacking FinFETs, GAAFETs, or even a combination of both.
Monolithic stacking of transistor devices doesn’t just deliver improved density. It is a powerful way to integrate multiple materials on a single silicon substrate, providing significantly improved performance and opening the door to entirely new classes of products with unique functionality. At IEDM, Intel engineers demonstrated two innovative approaches to monolithic integration.
In the first example, our team has stacked a germanium-based GAAFET PMOS device layer on top of a more traditional silicon FinFET NMOS device layer. Germanium is an element with many properties similar to those of silicon, but it has found limited use in semiconductor chips because it can be challenging to manufacture alongside silicon. However, because of the structure of its crystal lattice, using germanium in the transistor channel can significantly improve the switching speed of a PMOS device, which typically operates more slowly than its complementary NMOS device. Monolithic processing allowed us to fabricate a germanium-based PMOS device with record-setting performance, and then stack it on top of a silicon-based NMOS device.
In the second example, another team has used monolithic integration to stack a standard silicon PMOS device layer on top of an NMOS device layer that leverages a channel made from gallium nitride — a compound that is widely recognized as the best material for power delivery and radio frequency (RF) applications, such as next-generation 5G front-end modules. These types of chips are currently built as standalone units, but this new technique could allow for full integration of RF functionality with standard silicon-based processors.
System scaling: Beyond the transistor
Continuing to drive Moore’s Law scaling requires integrating improvements from every aspect of the manufacturing process, not just at the transistor level. For decades, many in the industry viewed packaging as simply the final manufacturing step — the place where we make the electrical connections between the processor and the motherboard. But this has changed dramatically in recent years.
Ten years ago, the emphasis in SoC integration was on implementing GPU and I/O functionality in the same die as a high-performance CPU. In the future, advanced packaging technologies will be used to link different types of processors together, without forcing them to share a single manufacturing material or process node.
This type of dis-integration may seem, at least initially, to be the antithesis of what Moore’s Law is intended to accomplish, but the performance and density improvements gained by matching each type of processor to its own best-fit transistor logic and design implementation often outweigh the negatives caused by separating a monolithic die into smaller chiplets. In fact, in his original paper in 1965, Moore stated that it “may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected.”
Intel has already deployed technologies like EMIB (Embedded Multi-die Interconnect Bridge) and Foveros to connect chiplets in both two and three dimensions, such as placing HBM between CPU and GPU (as in Kaby Lake G, with EMIB), or to connect the 10nm compute die used in Intel’s upcoming Lakefield processor face-to-face with the 22nm I/O die directly below it. We also have plans to combine Foveros and EMIB together, in a technology called Co-EMIB, in which multiple 3D Foveros chips are connected via EMIB, allowing Intel to build chips far larger than the reticle size for any monolithic processor and scale out chip designs much more widely than before.
Intel is already looking ahead past Co-EMIB toward a new standard called Omni-Directional Interconnect. One of the problems with stacking chips on top of each other using existing methods like through-silicon vias is that the amount of power you can push through such tiny wires is limited. ODI uses much thicker vias for power delivery, while offering the same capabilities as Foveros when deployed for 3D face-to-face bonding.
ODI can be used to connect chiplets in a wide variety of configurations, including scenarios in which one die is partially buried and acting as a bridge between two others, completely buried, or even between two slightly overlapped die, with ODI used between them for thicker power pillars, allowing for chips to be packed much more tightly together.
The ability to integrate 3D stacks of processors presents another method for improving silicon density that’s completely decoupled from a “classic,” exclusively transistor-focused concept of Moore’s Law. Traditional monolithic scaling will continue at 7nm with the introduction of EUV, then at 5nm and beyond, but it’s not the sole area where Intel expects to lead with continual, generation-on-generation improvements in both density and performance.
The improvements that will drive future Moore’s Law scaling at Intel aren’t driven solely by process node shrinks or lithography improvements, but by collaboration between multiple engineering teams engaged in different parts of the design process. Here, Intel’s unique status as an integrated device manufacturer (IDM) is an advantage. Because Intel manufactures its own products, there’s close collaboration between the design teams architecting future iterations of Intel processors and the fab engineers who will build those parts. We have the option to tweak an architecture to better match the capabilities of a process node, or to fine-tune a node to match capabilities we want to deliver in a given architecture.
There’s no denying that we face significant challenges in our industry, but the future of Moore’s Law will be anything but a slow decline into obsolescence. Broadening the scope of how we deliver generational scaling improvements has widened the possible options for delivering them. I’ve never felt as optimistic about the long-term health of Moore’s Law as I do right now.
Robert Chau is Intel Senior Fellow and Director, Components Research.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact email@example.com.