In August 2018, Dell EMC and Intel announced intentions to jointly design Frontera, an academic supercomputer funded by a $60 million grant from the National Science Foundation that would replace Stampede2 at the University of Texas at Austin’s Texas Advanced Computing Center (TACC). Those plans came to fruition in June when the two companies deployed Frontera, which was formally unveiled this morning.

Intel claims that Frontera can achieve peak performance of 38.7 quadrillion floating-point operations per second, or petaflops, making it the world’s fastest computer designed for academic workloads like modeling and simulation, big data, and machine learning. (That’s compared with Stampede2’s peak performance of 18 petaflops.) Earlier this year, Frontera earned the fifth spot on the twice-annual Top500 list, which ranks the world’s most powerful non-distributed computer systems, with 23.5 petaflops on the LINPACK benchmark.
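A peak figure like that is a theoretical ceiling computed from core counts, clock speed, and vector width rather than a measured result, which is why it sits well above the 23.5-petaflop LINPACK score. Here is a rough sketch of where such a number comes from; the dual-socket layout, ~2.7GHz clock, and 32 double-precision FLOPS per core per cycle are illustrative assumptions, not figures from the announcement.

```c
/* Back-of-the-envelope peak-FLOPS estimate for a Cascade Lake system.
 * All machine parameters below are assumptions for illustration only. */
#include <stdio.h>

int main(void) {
    double nodes            = 8008;   /* node count cited later in this article */
    double sockets_per_node = 2;      /* assumed dual-socket PowerEdge servers  */
    double cores_per_socket = 28;     /* 28-core 2nd Gen Xeon Scalable parts    */
    double clock_hz         = 2.7e9;  /* assumed ~2.7 GHz base clock            */
    double flops_per_cycle  = 32;     /* assumed: 2 AVX-512 FMA units x 8 DP lanes x 2 ops */

    double peak = nodes * sockets_per_node * cores_per_socket
                * clock_hz * flops_per_cycle;
    printf("Theoretical peak: %.1f petaflops\n", peak / 1e15);  /* ~38.7 */
    return 0;
}
```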

“The Frontera system will provide researchers computational and artificial intelligence capabilities that have not existed before for academic research,” said Trish Damkroger, vice president and general manager of Intel’s extreme computing organization. “With Intel technology, this new supercomputer opens up new possibilities in science and engineering to advance research, including cosmic understanding, medical cures, and energy needs.”

Thousands of 28-core 2nd Gen Xeon Scalable (Cascade Lake) processors slotted into Dell EMC PowerEdge servers handle Frontera’s heavy computational lifting, alongside Nvidia nodes for single-precision computing. The chips support Intel’s Advanced Vector Extensions 512 (AVX-512), a set of instructions that enables twice the number of FLOPS per clock cycle compared with the previous generation.
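To make the FLOPS-per-clock point concrete, the kind of arithmetic AVX-512 enables looks like the sketch below: one fused multiply-add instruction operates on eight double-precision values at once, twice the width of a 256-bit AVX2 register. The routine and build flags are illustrative, not code from TACC or Intel.

```c
/* Minimal AVX-512 FMA sketch: y[i] += a * x[i], 8 doubles per instruction.
 * Requires an AVX-512-capable CPU; build with e.g. gcc -O2 -mavx512f. */
#include <immintrin.h>

void daxpy_avx512(double a, const double *x, double *y, long n) {
    __m512d va = _mm512_set1_pd(a);            /* broadcast scalar a to 8 lanes */
    long i = 0;
    for (; i + 8 <= n; i += 8) {
        __m512d vx = _mm512_loadu_pd(&x[i]);   /* load 8 doubles */
        __m512d vy = _mm512_loadu_pd(&y[i]);
        vy = _mm512_fmadd_pd(va, vx, vy);      /* 8 multiply-adds in one instruction */
        _mm512_storeu_pd(&y[i], vy);
    }
    for (; i < n; i++)                         /* scalar tail */
        y[i] += a * x[i];
}
```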

Frontera employs liquid cooling for the majority of its nodes, with Dell EMC supplying water cooling from system integration firm CoolIT and oil immersion cooling from Green Revolution Cooling, and it leverages Mellanox HDR and HDR-100 interconnects to transmit data at speeds of up to 200Gbps per link between the switches that connect its 8,008 nodes. Each rack is anticipated to draw around 65 kilowatts of power, roughly a third of which TACC is procuring through wind power credits and wind power production, in addition to solar power.
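Applications on a system like this typically exercise the interconnect through MPI rather than driving the fabric directly. A minimal ping-pong probe of the sort used to sanity-check per-link bandwidth might look like the following sketch; the message size and repetition count are arbitrary illustrative choices, not TACC benchmarks.

```c
/* Minimal MPI ping-pong bandwidth probe between ranks 0 and 1.
 * Build and run with two ranks, e.g.: mpicc pingpong.c && mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int bytes = 64 * 1024 * 1024;        /* 64 MB message (illustrative) */
    const int reps  = 50;
    char *buf = malloc(bytes);

    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* 2x because each repetition sends the message in both directions. */
        double gbps = (2.0 * reps * bytes * 8) / (t1 - t0) / 1e9;
        printf("Observed bandwidth: %.1f Gbit/s\n", gbps);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}
```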

As for storage, Frontera features four different environments architected by DataDirect Networks, which together total more than 50 petabytes of capacity paired with 3 petabytes of NAND flash. (That works out to about 480GB of SSD storage per node.) Three are general-purpose in nature, while the fourth boasts “very fast” connectivity of up to 1.5 terabytes per second.

Lastly, Frontera takes advantage of Intel Optane DC persistent memory, a non-volatile memory technology developed by Intel and Micron Technology that’s pin-compatible with DDR4 and combines large memory capacity with a smaller DRAM pool (192GB per node) for improved performance. Paired with the latest generation of Xeon Scalable processors, Intel pegs Optane DC PM’s performance at 287,000 operations per second (versus a conventional DRAM and storage combo’s 3,164 operations per second), with a restart time of only 17 seconds.
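In its App Direct mode, persistent memory like this is typically exposed to applications as a DAX-mounted filesystem and programmed through libraries such as Intel’s PMDK. The sketch below uses PMDK’s libpmem to write data that survives a restart; the /mnt/pmem path and file size are hypothetical, not details of Frontera’s configuration.

```c
/* Minimal libpmem (PMDK) sketch: map a file on a DAX-mounted pmem device,
 * write to it, and flush so the data survives a power loss or restart.
 * Build with e.g. gcc example.c -lpmem. The path below is hypothetical. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* Create and map a 1 GiB file backed by persistent memory. */
    char *addr = pmem_map_file("/mnt/pmem/example.dat", 1ULL << 30,
                               PMEM_FILE_CREATE, 0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char msg[] = "state that persists across restarts";
    memcpy(addr, msg, sizeof(msg));

    if (is_pmem)
        pmem_persist(addr, sizeof(msg));   /* flush CPU caches to persistent media */
    else
        pmem_msync(addr, sizeof(msg));     /* fall back to msync on non-pmem storage */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```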

Frontera is already being used by folks like Manuela Campanelli, professor of astrophysics at the Rochester Institute of Technology and director of the Center for Computational Relativity and Gravitation, to develop a simulation that might explain the origin of energy bursts emitted during a neutron star merger. Professor George Biros of UT Austin has tapped Frontera to build biophysical models of tumor growth to more effectively diagnose and treat gliomas, a type of brain tumor. And Olexandr Isayev, assistant professor at the University of North Carolina at Chapel Hill, is using the system to train an AI model that describes the force fields and potential energy of molecules based on their 3D structure.

Frontera joins more than a dozen advanced computing systems currently deployed at TACC, including Lonestar and Maverick, and it’s expected to operate for five years. Its next phase will involve application-specific accelerators, including quantum simulators and tensor core systems, which together are expected to deliver roughly 10 times faster overall compute.

“Frontera will provide scientists across the country with access to unprecedented computational modeling, simulation, and data analytics capabilities,” said the NSF’s assistant director for computer and information science and engineering. “Frontera represents the next step in NSF’s more than three decades of support for advanced computing capabilities that ensure the U.S. retains its global leadership in research frontiers.”

Up to 80% of the available hours on Frontera will be accessible through the NSF Petascale Computing Resource Allocation program, according to TACC.