Lawrence Livermore National Laboratory (LLNL) said it has integrated Cerebras Systems’ new product, which the company claims is the world’s largest computer chip, into its Lassen supercomputer for the National Nuclear Security Administration.

Technicians recently connected the Silicon Valley-based company’s massive, 1.2-trillion-transistor Wafer Scale Engine (WSE) chip, designed specifically for machine learning and AI applications, to the 23-petaflop Lassen. The new wafer-sized chip will help accelerate AI research at LLNL. The National Nuclear Security Administration tests the nation’s stockpile of nuclear weapons using the IBM- and Nvidia-based Sierra system, to which Lassen is the unclassified companion; Lassen ranks No. 14 on the latest Top500 list of the world’s most powerful supercomputers.

Normally, chipmakers slice a wafer from a 12-inch-diameter silicon ingot and process it in a chip factory. Once processed, the wafer is diced into hundreds of separate chips that can be used in electronic hardware.

But Cerebras, founded by SeaMicro founder Andrew Feldman, turns that entire wafer into a single, massive chip. Each processing element on the chip, dubbed a core, is interconnected with the other cores by a sophisticated fabric. The interconnections are designed to keep everything running at high speed, so the transistors all work together as one.

The Cerebras WSE has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. By comparison, Intel’s first processor, the 4004, released in 1971, had 2,300 transistors, and a recent Advanced Micro Devices processor has 32 billion.

LLNL said the computer will enable researchers to investigate novel approaches to predictive modeling. Users gained access to the system in July and have begun work on initial AI models.

Early applications include fusion implosion experiments performed at the National Ignition Facility, materials science, and the rapid design of new drugs for COVID-19 and cancer treatment (through the Accelerating Therapeutic Opportunities in Medicine, or ATOM, project).

The CS-1 system runs on the WSE chip, which consists of 400,000 AI-optimized cores, 18 gigabytes of on-chip memory, and 100 petabits per second of on-chip network bandwidth. The upgrade marks the first time LLNL has had a high-performance computing (HPC) resource that includes AI-specific hardware. Effectively, it is the world’s first computer system designed for “cognitive simulation” (CogSim), a term LLNL scientists use to describe the combination of traditional HPC simulations and AI techniques.

Combining the CS-1 with Lassen allows LLNL to explore heterogeneity, in which a supercomputer includes computing elements with different specializations that each contribute to a given job. This would allow operations such as data generation and error correction to run concurrently, resulting in a more integrated, efficient, and cost-effective approach to scientific problems, according to lab researchers.
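The article does not describe how LLNL implements this overlap, but the general idea of running a data-generation stage concurrently with a consuming stage can be illustrated with a minimal sketch. Everything here (the function names, the stand-in "simulation" and "training" steps) is hypothetical and for illustration only:

```python
import queue
import threading

# Hypothetical sketch: one worker "simulates" data while another consumes
# it for a "training" step, so the two stages overlap in time instead of
# running back-to-back. The real LLNL/Cerebras pipeline is far larger and
# spans distinct hardware; only the producer/consumer shape is shown here.

def run_pipeline(n_batches=8):
    batches = queue.Queue(maxsize=2)   # small buffer between the two stages
    results = []

    def generate():                    # stand-in for the simulation stage
        for i in range(n_batches):
            batches.put([i] * 4)       # stand-in for one batch of sim output
        batches.put(None)              # sentinel: no more data

    def train():                       # stand-in for the AI stage
        while True:
            batch = batches.get()
            if batch is None:
                break
            results.append(sum(batch) / len(batch))  # stand-in training step

    producer = threading.Thread(target=generate)
    consumer = threading.Thread(target=train)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return results

# Usage: run_pipeline(3) returns one result per generated batch.
```

The bounded queue is the key design choice: it lets the producer run ahead of the consumer by a couple of batches without either stage ever idling for the other to finish completely.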

Above: Cerebras packs 1.2 trillion transistors on a wafer.

Cerebras says the WSE processor is optimized for deep learning. It is 56 times larger than the largest graphics processing unit (GPU) and contains 78 times more compute cores. The WSE has 3,000 times more on-chip memory and more than 10,000 times more memory bandwidth than GPUs.

LLNL researchers said that their preliminary work focuses on learning from up to 5 billion simulated laser implosion images to optimize fusion targets for experiments at the National Ignition Facility (NIF), with the goal of reaching high energy output and robust fusion implosions for stockpile testing.

To ensure successful integration, LLNL and Cerebras will collaborate through an AI center of excellence in the coming years. Depending on the results, LLNL could add more CS-1 systems, both to Lassen and to other supercomputing platforms.